

Constant
Mondotheque: A Radiated Book
2016



Mondotheque::a radiated book/un livre irradiant/een irradiërend boek


Index
• Mondotheque::a radiated book/un livre irradiant/een
irradiërend boek
◦ Property:Person (agents + actors)
◦ EN Introduction
◦ FR Préface
◦ NL Inleiding
• Embedded hierarchies
◦ FR+NL+EN A radiating interview/Un entrevue irradiant/Een irradiërend gesprek
◦ EN Amateur Librarian - A Course in Critical Pedagogy TOMISLAV MEDAK &
MARCELL MARS (Public Library project)
◦ FR Bibliothécaire amateur - un cours de pédagogie critique TOMISLAV MEDAK & MARCELL MARS
◦ EN A bag but is language nothing of words MICHAEL MURTAUGH
◦ EN A Book of the Web DUSAN BAROK
◦ EN The Indexalist MATTHEW FULLER
◦ NL De Indexalist MATTHEW FULLER
◦ FR Une lecture-écriture du livre sur le livre ALEXIA DE VISSCHER

• Disambiguation
◦ EN An experimental transcript SÎNZIANA PĂLTINEANU
◦ EN+FR LES UTOPISTES and their common logos/et leurs logos communs DENNIS POHL
◦ EN X = Y DICK RECKARD
◦ EN+NL Madame C/Mevrouw C FEMKE SNELTING
◦ EN A Pre-emptive History of the Google Cultural Institute GERALDINE JUÁREZ
◦ FR Une histoire préventive du Google Cultural Institute GERALDINE JUÁREZ
◦ EN Special:Disambiguation

• Location, location, location
◦ EN From Paper Mill to Google Data Center SHINJOUNG YEO
◦ EN House, City, World, Nation, Globe NATACHA ROUSSEL
◦ EN The Smart City - City of Knowledge DENNIS POHL
◦ FR La ville intelligente - Ville de la connaissance DENNIS POHL
◦ EN The Itinerant Archive
• Cross-readings
◦ EN Les Pyramides
◦ EN Transclusionism
◦ EN Reading list
◦ FR+EN+NL Colophon/Colofon
Last Revision: 2·08·2016


Property:Person
Meet the cast of historical, contemporary and fictional people that populate La
Mondotheque.

Andrew Carnegie, Warden Boyd Rayward, Françoise Levie, Alex Wright, André Canonne, Arni Jonsson, Barack Obama, Bernard Otlet, Bill Echikson, Sauli Niinistö, Patrick Lafontaine, Delphine Jenart, Elio Di Rupo, Sylvia Van Peteghem, Nooka Kiili, Thierry Geerts, Joyce Proot, Roi Albert II, Jean-Paul Deplus, Jean-Claude Marcourt, Alexander De Croo, Nicolas Sarkozy, Eric E. Schmidt, Ernest de Potter, Guy Quaden, Rudy Demotte, Yves Vasseur, Evgeny Rodionov, Stéphanie Manfroid, Alexia de Visscher, Femke Snelting, Robert M. Ochshorn, Nicolas Malevé, Michael Murtaugh, Dennis Pohl, Jan Gerber, François Schuiten, Dick Reckard, Sînziana Păltineanu, Andre Castro, Marcell Mars, Sebastian Luetgert, Donatella Portoghese, Natacha Roussel, Gustave Abeels, Harm Post, Henri La Fontaine, Mathilde Lhoest, Igor Platounoff, Wilhelmina Coops, Annie Besant, Jean François Fueg, Jean Otlet Jr., Louis Masure, Marcel Flamion, Jean Delville, Jiddu Krishnamurti, Mademoiselle Poels, Mademoiselle de Bauche, Marie-Louise Philips, Madame Taupin, Pierre Bourgeois, Paul Otlet, Marie Van Mons, Cato van Nederhasselt, Georges Lorphèvre, Le Corbusier, Hélène de Mandrot, W.E.B. Du Bois, Paul Panda, Blaise Diagne, Sebastien Delneste, André Colet, Thea Coops, Broese van Groenou, Steve Crossan, Vint Cerf, Chris Burns, Yves Bernard, as well as several unknown men and unidentified women.

Introduction
This Radiated Book started three years ago with an e-mail from the Mundaneum archive
center in Mons. It announced that Elio Di Rupo, then prime minister of Belgium, was about
to sign a collaboration agreement between the archive center and Google. The newsletter
cited an article in the French newspaper Le Monde that dubbed the Mundaneum 'Google
on paper'[1]. It was our first encounter with many variations on the same theme.
The former mining area around Mons is also where Google has installed its largest
datacenter in Europe, a result of negotiations by the same Di Rupo[2]. Due to the re-branding
of Paul Otlet as ‘founding father of the Internet’, Otlet's oeuvre finally started to receive
international attention. Local politicians wanting to transform the industrial heartland into a
home for The Internet Age seized the moment and made the Mundaneum a central node in
their campaigns. Google — grateful for discovering its posthumous francophone roots — sent
chief evangelist Vint Cerf to the Mundaneum. Meanwhile, the archive center allowed the
company to publish hundreds of documents on the website of Google Cultural Institute.
While the visual resemblance between a row of index drawers and a server park might not
be a coincidence, it is something else to conflate the type of universalist knowledge project
imagined by Paul Otlet and Henri La Fontaine with the enterprise of the search giant. The
statement 'Google on paper' acted as a provocation, evoking other cases in other places
where geographically situated histories are turned into advertising slogans, and cultural
infrastructures pushed into the hands of global corporations.
An international band of artists, archivists and activists set out to unravel the many layers of
this mesh. The direct comparison between the historical Mundaneum project and the mission
of Alphabet Inc[3] speaks of manipulative simplification on multiple levels, but to de-tangle its
implications was easier said than done. Some of us were drawn in by misrepresentations of
the oeuvre of Otlet himself, others felt the need to give an account of its Brussels roots, to reinsert the work of maintenance and caretaking into the his/story of founding fathers, or joined
out of concern with the future of cultural institutions and libraries in digital times.
We installed a Semantic MediaWiki and named it after the Mondotheque, a device
imagined by Paul Otlet in 1934. The wiki functioned as an online repository and frame of
reference for the work that was developed through meetings, visits and presentations[4]. For
Otlet, the Mondotheque was to be an 'intellectual machine': at the same time archive, link
generator, writing desk, catalog and broadcast station. Thinking the museum, the library, the
encyclopedia, and classificatory language as a complex and interdependent web of relations,
Otlet imagined each element as a point of entry for the other. He stressed that responses to


displays in a museum involved intellectual and social processes that were different from
those involved in reading books in a library, but that one in a sense entailed the other.[5] The
dreamed capacity of his Mondotheque was to interface scales, perspectives and media at the
intersection of all those different practices. For us, by transporting a historical device into the
future, it figured as a kind of thinking machine, a place to analyse historical and social
locations of the Mundaneum project, a platform to envision our persistent interventions
together. The speculative figure of Mondotheque enabled us to begin to understand the
situated formations of power around the project, and allowed us to think through possible
forms of resistance. [6]
The wiki at http://mondotheque.be grew into a labyrinth of images, texts, maps and semantic
links, tools and vocabularies. MediaWiki is a Free software infrastructure developed in the
context of Wikipedia and comes with many assumptions about the kind of connections and
practices that are desirable. We wanted to work with Semantic extensions specifically
because we were interested in the way The Semantic Web[7] seemed to resemble Otlet's
Universal Decimal Classification system. At many moments we felt ourselves going down
rabbit-holes of universal completeness, endless categorisation and nauseas of scale. It made
the work at times uncomfortable, messy and unruly, but it allowed us to do the work of
unravelling in public, mixing political urgency with poetic experiments.
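For readers who want to poke at such a wiki themselves: the Semantic MediaWiki extension answers #ask queries through the standard MediaWiki api.php endpoint. The sketch below shows one such query in Python; the endpoint path, category and property names are illustrative assumptions, not the actual vocabulary of mondotheque.be.

```python
# Minimal sketch: querying a Semantic MediaWiki "ask" endpoint over HTTP.
# Assumes the wiki exposes the standard MediaWiki api.php with the Semantic
# MediaWiki extension enabled; [[Category:Document]] and the Person printout
# are invented for the example, not the wiki's real vocabulary.
import json
import urllib.parse
import urllib.request

API = "http://mondotheque.be/api.php"  # hypothetical endpoint location

def ask(query: str) -> dict:
    """Send an #ask query and return the decoded JSON result."""
    params = urllib.parse.urlencode({
        "action": "ask",
        "query": query,
        "format": "json",
    })
    with urllib.request.urlopen(f"{API}?{params}") as response:
        return json.load(response)

if __name__ == "__main__":
    # List pages in a (hypothetical) Document category together with the
    # persons linked to them through a semantic property.
    results = ask("[[Category:Document]]|?Person|limit=10")
    for title, page in results.get("query", {}).get("results", {}).items():
        persons = page.get("printouts", {}).get("Person", [])
        print(title, "->", [p.get("fulltext", p) for p in persons])
```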
This Radiated Book was made because we wanted to create a moment, an incision into that
radiating process that allowed us to invite many others to look at the interrelated materials
without the need to provide a conclusive document. As a salute to Otlet's ever expanding
Radiated Library, we decided to use the MediaWiki installation to write, edit and generate
the publication, which explains some of the welcome anomalies on the very pages of this
book.
The four chapters that we propose each mix fact and fiction, text and image, document and
catalogue. In this way, process and content are playing together and respond to the specific
material entanglements that we encountered. Mondotheque, and as a consequence this
Radiated book, is a multi-threaded, durational, multi-scalar adventure that in some way
diffracts the all-encompassing ambition that the 19th century Utopia of Mundaneum stood
for.
Embedded hierarchies addresses how classification systems, and the dream of their universal
application, actually operate. It brings together contributions that are concerned with
knowledge infrastructures at different scales, from disobedient libraries, institutional practices
of the digital archive and meta-data structures, to indexing as a pathological condition.
Disambiguation dis-entangles some of the similarities that appear around the heritage of Paul
Otlet. Through a close-reading of seemingly similar biographies, terms and vocabularies, it relocates ambiguity to other places.

Location, location, location is an account of geo-political layers at work. Following the
itinerant archive of Mundaneum through the capital of Europe, we encounter local, national
and global Utopias that in turn leave their imprint on the way the stories play out. From the
hyperlocal to the global, this chapter traces patterns in the physical landscape.
Cross-readings consists of lists, image collections and other materials that make connections
emerge between historical and contemporary readings, unearthing possible spiritual or
mystical underpinnings of the Mundaneum, and transversal inclusions of the same elements in
between different locations.
The point of modest operations such as Mondotheque is to build the collective courage to
persist in demanding access to both the documents and the intellectual and technological
infrastructures that interface and mediate them. Exactly because of the urgency of the
situation, where the erosion of public institutions has become evident, and all forms of
communication seem to feed into neo-liberal agendas eventually, we should resist
simplifications and find the patience to build a relation to these histories in ways that make
sense. It is necessary to go beyond the current techno-determinist paradigm of knowledge
production, and for this, imagination is indispensable.

Paul Otlet, design for Mondotheque (Mundaneum archive center, Mons)
Last Revision: 2·08·2016

1. Jean-Michel Djian, Le Mundaneum, Google de papier, Le Monde Magazine, 19 December 2009


2. « À plusieurs reprises, on a eu chaud, parce qu'il était prévu qu'au moindre couac sur ce point, Google arrêtait tout » Libre Belgique, 27 April 2007
3. Sergey and I are seriously in the business of starting new things. Alphabet will also include our X lab, which incubates new
efforts like Wing, our drone delivery effort. We are also stoked about growing our investment arms, Ventures and Capital, as
part of this new structure. Alphabet Inc. will replace Google Inc. as the publicly-traded entity (...) Google will become a wholly-owned subsidiary of Alphabet https://abc.xyz/
4. http://mondotheque.be
5. The Mundaneum is an Idea, an Institution, a Method, a Body of work materials and Collections, a Building, a Network. Paul
Otlet, Monde (1935)
6. The analyses of these themes are transmitted through narratives -- mythologies or fictions, which I have renamed as "figurations"
or cartographies of the present. A cartography is a politically informed map of one's historical and social locations, enabling the
analysis of situated formations of power and hence the elaboration of adequate forms of resistance Rosi Braidotti, Nomadic
Theory (2011)
7. Some people have said, "Why do I need the Semantic Web? I have Google!" Google is great for helping people find things, yes!
But finding things more easily is not the same thing as using the Semantic Web. It's about creating things from data you've
compiled yourself, or combining it with volumes (think databases, not so much individual documents) of data from other sources
to make new discoveries. It's about the ability to use and reuse vast volumes of data. Yes, Google can claim to index billions of
pages, but given the format of those diverse pages, there may not be a whole lot more the search engine tool can reliably do.
We're looking at applications that enable transformations, by being able to take large amounts of data and be able to run models
on the fly - whether these are financial models for oil futures, discovering the synergies between biology and chemistry researchers
in the Life Sciences, or getting the best price and service on a new pair of hiking boots. Tim Berners-Lee interviewed in
Consortium Standards Bulletin, 2005 http://www.consortiuminfo.org/bulletins/semanticweb.php

Embedded hierarchies


A radiating interview/Un entrevue irradiant/Een irradiërend gesprek
Stéphanie Manfroid and Raphaèle Cornille are responsible for the
Mundaneum archives in Mons. We speak with them about the relationship
between the universe of Otlet and the concrete practice of scanning, meta-data
and on-line publishing, and the possibilities and limitations of their work with
Google. How to imagine a digital archive that could include the multiple
relationships between all documents in the collection? How to make visible
the continuous work of describing, maintaining and indexing?


The interview is part of a series of interviews with Belgian knowledge
institutions and their vision on digital information sharing. The voices of Sylvia
Van Peteghem and Dries Moreels (Ghent University), Églantine Lebacq and
Marc d'Hoore (Royal Library of Belgium) resonate on the following pages.
We hear from them about the differences and similarities in how the three
institutions deal with the unruly practice of digital heritage.

The full interviews with the Royal Library of Belgium and Ghent University
Library can be found in the on-line publication.

• RC = Raphaèle Cornille (Mundaneum archive center, responsable des collections
iconographiques)
• SM = Stéphanie Manfroid (Mundaneum archive center, responsable des archives)
• ADV = Alexia de Visscher
• FS = Femke Snelting

Mons, 21 avril 2016
PAS MAL DE CHOSES À FAIRE

ADV : Dans votre politique de numérisation, quelle infrastructure d’accès envisagez-vous et
pour quel type de données et de métadonnées ?
RC : On numérise depuis longtemps au Mundaneum, depuis 1995. À l’époque, il y avait
déjà du matériel de numérisation. Forcément pas avec les mêmes outils que l'on a aujourd'hui,
on n’imaginait pas avoir accès aux bases de données sur le net. Il y a eu des évolutions
techniques, technologiques qui ont été importantes. Ce qui fait que pendant quelques années
on a travaillé avec le matériel qui était toujours présent en interne, mais pas vraiment avec un
plan de numérisation sur le long terme. Juste pour répondre à des demandes, soit pour nous,
parce qu’on avait des publications ou des expositions ou parce qu’on avait des demandes
extérieures de reproductions.
L’objectif évidemment c’est de pouvoir mettre à la disposition du public tout ce qui a été
numérisé. Il faut savoir que nous avons une base de données qui s’appelle Pallas[1] qui a été
soutenue par la Communauté Française depuis 2003. Malheureusement, le logiciel nous
pose pas mal de problèmes. On a déjà tenté des intégrations d'images et ça ne s'affiche pas
toujours correctement. Parfois on a des fiches descriptives mais nous n’avons pas l’image qui
correspond.
SM : Les archives soutenues par la Communauté française, mais aussi d’autres centres, ont
opté pour Pallas. C’est ainsi que ce système permettait une compréhension des archives en
Belgique et en Communauté française notamment.

L’idée c’est que les centres d’archives utilisent tous un même système. C’est une belle
initiative, et dans ce cadre là, c’était l’idée d’avoir une plateforme générale, où toutes les
sources liées aux archives publiques, enfin les archives soutenues par la Communauté
Française - qui ne sont pas publiques d’ailleurs - puissent être accessibles à un seul et même
endroit.
RC : Il y avait en tout cas cette idée par la suite, d’avoir une plate-forme commune, qui
s’appelle numériques.be[2]. Malheureusement, ce qu’on trouve sur numeriques.be ne
correspond au contenu sur Pallas, ce sont deux structures différentes. En gros, si on veut
diffuser sur les deux, c’est deux fois le travail.
En plus, ils n'ont pas configuré numeriques.be pour qu'il puisse être moissonné par
Europeana[3]. Il y a des normes qui ne correspondent pas encore.
SM : Ce sont des choix politiques là. Et nous on dépend de ça. Et nous, nous dépendons de
choix généraux. Il est important que l'on comprenne bien la situation d'un centre d'archives
comme le nôtre. Sa place dans le paysage patrimonial belge et francophone également.
Notre intention est de nous situer tant dans ce cadre qu’à un niveau européen mais aussi
international. Ce ne sont pas des combinaisons si aisées que cela à mettre en place pour ces
différents publics ou utilisateurs par exemple.
RC : Soit il y a un problème technique, soit il y a un problème d’autorisation. Il faut savoir
que c’est assez complexe au niveau des métadonnées, il y a pas mal de choses à faire. On a
pendant tout un temps numérisé, mais on a généré les métadonnées au fur et à mesure, donc
il y aussi un gros travail à réaliser par rapport à ça. Normalement, pour le début 2017 on
envisagera le passage à Europeana avec des métadonnées correctes et le fait qu’on puisse
verser des fichiers corrects.
C’est assez lourd comme travail parce que nous devons générer les métadonnées à chaque
fois. Si vous prenez le Dublin Core[4], c’est à chaque fois 23 champs à remplir par document.
On essaye de remplir le maximum. De temps en temps, ça peut être assez lourd quand
même.
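To make the scale of that effort concrete, here is a minimal sketch of what one such description looks like when serialized as simple Dublin Core (the fifteen unqualified elements; the 23 fields mentioned above suggest a qualified profile). All values are invented placeholders, not records from Pallas or the Mundaneum catalogue.

```python
# Minimal sketch of serializing one description as simple Dublin Core XML.
# Field values below are placeholders, not taken from the Mundaneum catalogue.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

def dublin_core_record(fields: dict) -> bytes:
    """Build an XML record with one <dc:...> element per (element, value)."""
    root = ET.Element("record")
    for element, value in fields.items():
        child = ET.SubElement(root, f"{{{DC_NS}}}{element}")
        child.text = value
    return ET.tostring(root, encoding="utf-8", xml_declaration=True)

record = dublin_core_record({
    "identifier": "MUND-EXAMPLE-0001",   # placeholder identifier
    "title": "Affiche (exemple)",
    "creator": "Inconnu",
    "contributor": "Imprimeur (exemple)",
    "description": "Notice d'exemple, sans lien avec une pièce réelle.",
    "subject": "documentation",
    "date": "1934",
    "type": "StillImage",
    "format": "image/tiff",
    "language": "fra",
    "coverage": "Bruxelles",
    "rights": "Collection de la Fédération Wallonie-Bruxelles",
    "relation": "Exposition (exemple)",
    "source": "Mundaneum archive center",
    "publisher": "Mundaneum",
})
print(record.decode("utf-8"))
```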
LA VIE DE LA PIÈCE

FS : Pouvez-vous nous parler du détail de la lecture des documents d’Otlet et de la rédaction
de leur description, le passage d’un document « Otletien » à une version numérisée ?

P.30

P.31

RC : Il faut déjà au minimum avoir un inventaire. Il faut
que les pièces soient numérotées, sinon c’est un peu
difficile de retracer tout le travail. Parfois, ça passe par
une petite phase de restauration parce qu’on a des
documents poussiéreux et quand on scanne ça se voit.
Parfois, on doit faire des mises à plat, pour les journaux
par exemple, parce qu’ils sont pliés dans les boîtes. Ça
prend déjà un petit moment avant de pouvoir les
numériser. Ensuite, on va scanner le document, ça c’est la
partie la plus facile. On le met sur le scanner, on appuie
sur un bouton, presque.
Si c’est un manuscrit, on ne va pas pouvoir océriser. Par
contre, si c’est un document imprimé, là, on va l’océriser
en sachant qu’il va falloir le revérifier par la suite, parce
qu’il y a toujours un pourcentage d’erreur. Par exemple,
dans les journaux, en fonction de la typographie, si vous
avez des mots qui sont un peu effacés avec le temps, il
faut vérifier tout ça. Et puis, on va générer les
métadonnées Dublin Core. L’identifiant, un titre, tout ce
qui concerne les contributeurs : éditeurs, illustrateurs,
imprimeurs, etc. C'est une description, c'est une
indexation par mots clefs, c’est une date, c’est une
localisation géographique, si il y en a une. C’est aussi,
faire des liens avec soit des ressources en interne soit des
ressources externes. Donc par exemple, moi si je pense à
une affiche, si elle a été dans une exposition si elle a été
publiée, il faut mettre toutes les références.

From Voor elk boek is een gebruiker:
SVP: Wij scannen op een totaal
andere manier. Bij Google gaat het
om massa-productie. Wij kiezen zelf
voor kleinere projecten. We hebben
een vaste ploeg, twee mensen die
voltijds scannen en beelden
verwerken, maar daarmee begin je
niet aan een project van 250.000
boeken. We doen wel een scan-on-demand of selecteren volledige
collecties. Toen we al onze
2.750.000 fiches enkele jaren
geleden door een externe firma lieten
scannen had ik medelijden met de
meisjes die de hele dag de
invoerscanner bedienden. Hopeloos
saai.
From X = Y:
According to the ideal image
described in "Traité", all the tasks of
collecting, translating, distributing,
should be completely automatic,
seemingly without the necessity of
human intervention. However, the
Mundaneum hired dozens of women
to perform these tasks. This human-run version of the system was not
considered worth mentioning, as if it
was a temporary in-between phase
that should be overcome as soon as
possible, something that was staining
the project with its vulgarity.

SM : La vie de la pièce.
RC : Et faire le lien par exemple vers d’autres fonds, une autre lettre… Donc, vous avez
vraiment tous les liens qui sont là. Et puis, vous avez la description du fichier numérique en
lui-même. Nous, on a à chaque fois quatre fichiers numériques : un fichier RAW, un fichier
TIFF en 300 DPI, un JPEG en 300 DPI et un dernier JPEG en 72 DPI, qui sont en fait les
trois formats qu’on utilise le plus. Et puis, là pareil, vous remettez un titre, une date, vous
avez aussi tout ce qui concerne les autorisations, les droits… Pour chaque document il y a
tous ces champs à remplir.
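As an illustration of the derivative files Raphaèle Cornille lists (a RAW master, a 300 DPI TIFF, a 300 DPI JPEG and a 72 DPI JPEG), here is a hedged sketch using the Pillow library; the RAW-to-TIFF step is left out because it depends on the scanner software, and the file names are invented.

```python
# Minimal sketch of producing the two JPEG derivatives described in the
# interview (300 DPI and 72 DPI) from a 300 DPI master TIFF, using Pillow.
from PIL import Image

def make_derivatives(master_tiff: str, stem: str) -> None:
    with Image.open(master_tiff) as master:
        # 300 DPI JPEG: same pixel dimensions, only the format changes.
        master.convert("RGB").save(f"{stem}_300dpi.jpg", "JPEG",
                                   dpi=(300, 300), quality=90)

        # 72 DPI JPEG: scale the pixel dimensions down in proportion,
        # so the print size stays the same at the lower resolution.
        factor = 72 / 300
        preview_size = (max(1, int(master.width * factor)),
                        max(1, int(master.height * factor)))
        preview = master.convert("RGB").resize(preview_size)
        preview.save(f"{stem}_72dpi.jpg", "JPEG", dpi=(72, 72), quality=85)

# make_derivatives("MUND-EXAMPLE-0001.tif", "MUND-EXAMPLE-0001")
```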
SM : Face à un schéma d’Otlet, on se demandait parfois ce que sont tous ces gribouillons.
On ne comprend pas tout de suite grand chose.
FS : Qui fait la description ? Plusieurs personnes ou quelqu’un qui travaille seul ?

RC : Ça demande quand même une certaine discipline, de la concentration et du temps pour
pouvoir le faire bien.
RC : Généralement c’est quelqu’un seul qui décrit. Là c’est un texte libre, donc c’est encore
assez facile. Maintenant quand vous devez indexer, il faut utiliser des Thesaurus existants, ce
qui n’est pas toujours facile parce que parfois ce sont des contraintes, et que ce n’est pas tout
à fait le vocabulaire que vous avez l’habitude d’utiliser.
SM : On a rencontré une firme, effectivement, quelqu’un qui pensait qu’on allait pouvoir
automatiser la chaîne de description des archives avec la numérisation y compris. Il ne
comprenait pas que c’était une tâche impossible. C’est une tâche humaine. Et franchement,
toute l’expérience qu’on peut avoir par rapport à ça aide énormément. Je ne pense pas, là
maintenant, qu’un cerveau humain puisse être remplacé par une machine dans ce cadre. Je
n’y crois pas.
UNE MÉTHODE D’INDEXATION STANDARDISÉE

FS : Votre travail touche très intimement à la pratique d’Otlet même. En fait, dans les
documents que nous avons consultés, nous avons vus plusieurs essais d’indexation, plusieurs
niveaux de systèmes de classement. Comment cela se croise-t-il avec votre travail de
numérisation ? Gardez-vous une trace de ces systèmes déjà projetés sur les documents eux-mêmes ?
SM : Je crois qu’il y a deux éléments. Ici, si la question portait sur les étapes de la
numérisation, on part du document lui-même pour arriver à un nommage de fichier et il y a
une description avec plusieurs champs. Si finalement la pièce qui est numérisée, elle a sa
propre vie, sa propre histoire et c’est ça qu’on comprend. Par contre, au départ, on part du
principe que le fond est décrit, il y a un inventaire. On va faire comme si c’était toujours le
cas, ce n’est pas vrai d’ailleurs, ce n’est pas toujours le cas.
Et autre chose, aujourd’hui nous sommes un centre d’archives. Otlet était dans une
conception d’ouverture à la documentation, d’ouverture à l’Encyclopédie, vraiment quelque
chose de très très large. Notre norme de travail c’est d’utiliser la norme de description
générale des archives[5], et c’est une autre contrainte. C’est un gros boulot ça aussi.
On doit pouvoir faire des relations avec d’autres éléments qui se trouvent ailleurs, d’autres
documents, d’autres collections. C’est une lecture, je dirais presque en réseau des documents.
Évidemment c’est intéressant. Mais d’un autre côté, nous sommes archivistes, et c’est pas
qu’on n’aime pas la logique d’Otlet, mais on doit se faire à une discipline qui nous impose
aussi de protéger le patrimoine ici, qui appartient à la Communauté Française et qui donc
doit être décrit de manière normée comme dans les autres centres d’archives.


C’est une différence de dialogues. Pour moi ce n’est pas un détail du tout. Le fait que par
exemple, certains vont se dire « vous ne mettez pas l’indice CDU dans ces champs » ... vous
n’avez d’ailleurs pas encore posé cette question … ?
ADV : Elle allait venir !
SM : Aujourd’hui on ne cherche pas par indice CDU, c’est tout. Nous sommes un centre
d’archives, et je pense que ça a été la chance pour le Mundaneum de pouvoir mettre en
avant la protection de ce patrimoine en tant que tel et de pouvoir l’ériger en tant que
patrimoine réel, important pour la communauté.
RC : En fait la classification décimale n’étant pas une méthode d’indexation standardisée,
elle n’est pas demandée dans ces champs. Pour chaque champ à remplir dans le Dublin
Core, vous avez des normes à utiliser. Par exemple, pour les dates, les pays et la langue vous
avez les normes ISO, et la CDU n’est pas reconnue comme une norme.
Quand je décris dans Pallas, moi je mets l’indice CDU. Parce que les collections
iconographiques sont classées par thématique. Les cartes postales géographiques sont
classées par lieu. Et donc, j’ai à chaque fois l’indice CDU, parce que là, ça a un sens de le
mettre.
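A small sketch of the normalisation step described here: dates, languages and countries checked against ISO conventions (ISO 8601, ISO 639, ISO 3166), while the UDC class number is kept as a separate, local field. The lookup tables are tiny excerpts for the example, not the full ISO code lists.

```python
# Minimal sketch: ISO-style values for date, language and country, with the
# UDC notation kept locally (e.g. in Pallas) because it is not one of the
# accepted norms for those Dublin Core fields. Lookup tables are excerpts only.
from datetime import date

ISO_639_3 = {"fra": "French", "nld": "Dutch", "eng": "English"}
ISO_3166_1 = {"BE": "Belgium", "FR": "France"}

def normalise(record: dict) -> dict:
    """Validate a few fields against ISO conventions; raise if they fail."""
    # ISO 8601 date (YYYY-MM-DD); date.fromisoformat raises on bad input.
    date.fromisoformat(record["date"])
    if record["language"] not in ISO_639_3:
        raise ValueError(f"unknown ISO 639-3 language code: {record['language']}")
    if record["country"] not in ISO_3166_1:
        raise ValueError(f"unknown ISO 3166-1 country code: {record['country']}")
    return record

example = normalise({
    "identifier": "MUND-EXAMPLE-0002",  # placeholder
    "date": "1934-01-01",
    "language": "fra",
    "country": "BE",
    "udc": "02",  # UDC class kept as a local field, not a DC norm
})
print(example)
```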
FS : C’est très beau d’entendre cela mais c’est aussi tragique dans un sens. Il y a eu tellement
d’efforts faits à cette époque là pour trouver un standard ...
UN AXE DE COMMUNICATION

SM : La question de la légitimité du travail d’Otlet se place sur un débat contemporain qui
est amené sur la gestion des bases de données, en gros. Ça c’est un axe qui est de
communication, ce n’est pas le seul axe de travail de fond dans nos archives. Il faut distinguer
des éléments et la politique de numérisation, je ne suis pas en train de vouloir dire : « Tiens,
on est dans la gestion de méga-données chez nous. »
Nous ne gérons pas de grandes quantités de données. Le Big Data ne nous concerne pas
tout à fait, en terme de données conservées chez nous. Le débat nous intéresse au même titre
que ce débat existait sous une autre forme fin du 19e siècle avec l’avènement de la presse
périodique et la multiplication des titres de journaux ainsi que la diffusion rapide d’une
information.
RC : Le fait d’avoir eu Paul Otlet reconnu comme père de l’internet etcetera, d’avoir pu le
rattacher justement à des éléments actuels, c’était des sujets porteurs pour la communication.
Ça ne veut pas dire que nous ne travaillons que là dessus. Il en a fait beaucoup plus que ça.
C’était un axe porteur, parce qu’on est à l’ère de la numérisation, parce qu’on nous demande

de numériser, de valoriser. On est encore à travailler sur les archives, à dépouiller les
archives, à faire des inventaires et donc on est très très loin de ces réflexions justement Big
Data et tout ça.
FS : Est-il imaginable qu’Otlet ait inventé le World Wide Web ?
SM : Franchement, pour dire les choses platement : C’est impossible, quand on a un regard
historique, d’imaginer qu’Otlet a imaginé… enfin il a imaginé des choses, oui, mais est-ce
que c’est parce que ça existe aujourd’hui qu’on peut dire « il a imaginé ça » ?. C’est ce qu’on
appelle de l’anachronisme en Histoire. Déontologiquement, ce genre de choses un historien
ne peut pas le faire. Quelqu’un d’autre peut se permettre de le faire. Par exemple, en
communication c’est possible. Réduire à des idées simples est aussi possible. C’est même un
avantage de pouvoir le faire. Une idée passera donc mieux.
RC : Il y a des concepts qu’il avait déjà compris.
From Voor elk boek is een gebruiker:
Dus in de 19e eeuw wou Vander Haeghen een catalogus, en Otlet een bibliografie. En vandaag heeft Google alles samen met de volledige tekst erbij die dan nog op elk woord doorzoekbaar is. Dat is de droom van zowel Vander Haeghen als Otlet méér dan verder zetten. Vanuit die gedachte zijn wij vanzelfsprekend meegegaan. We hebben aan de Google onderhandelaars gevraagd: waarom doet Google dit? Het antwoord was: “Because it's in the heart of the founders”. Moesten wij de idealen van Vander Haeghen en Otlet niet als voorbeeld hebben gehad, dan was er misschien twijfel geweest, maar nu niet.

Maintenant, en fonction de l'époque, il n'a pas pu tout mettre en place mais il y a des choses qu'il avait comprises dès le départ. Par exemple, standardiser les choses pour pouvoir les changer. Ça, il le comprend dès le départ, c'est pour ça que la rédaction des fiches est standardisée, vous ne pouvez pas rédiger n'importe comment. C'est pour ça qu'il développe la CDU, il faut un langage qui soit utilisable par tous. Avec les moyens de communication qu'il a à l'époque, il imagine déjà à un moment pouvoir les combiner, sans doute parce qu'il a vu l'évolution des techniques et qu'il pense pouvoir aller plus loin. Il pense à la dématérialisation quand il utilise des microfilms, il se dit « attention, la conservation papier, il y a un souci. Il faut conserver le contenu et donc il faut le passer sur un autre
support ». D’abord il va essayer sur des plaques
photographiques, il calcule le nombre de pages qu’il peut mettre sur une plaque et voilà. Il
transforme ça en autre support.
Je pense qu’il a imaginé des choses, parce qu’il avait cette envie de communiquer le savoir,
ce n’est pas quelqu’un qui a un moment avait envie de collectionner sans diffuser, non. C’était
toujours dans cette idée de diffuser, de communiquer quelques soient les personnes, quelque
soit le pays. C’est d’ailleurs pour ça qu’il adapte le Musée International, pour que tout le
monde puisse y aller, même ceux qui ne savaient pas lire avaient accès aux salles et
pouvaient comprendre, parce qu’il avait organisé les choses de telles façons. Il imagine à
chaque fois des outils de communication qui vont lui servir pour diffuser ses idées, sa pensée.


Qu’il ait imaginé à un moment donné qu’on puisse lire des choses à l’autre bout du monde ?
Il a dû y penser, mais maintenant, techniquement et technologiquement, il n'a pas pu
concevoir. Mais je suis sûre qu’il avait envisagé le concept.
CELUI QUI FAIT UN PEU DE TOUT, IL LE FAIT UN PEU MOINS BIEN

SM : Otlet, à son époque, a par moments réussi à se faire détester par pas mal de gens,
parce qu’il y avait une sorte de confusion au niveau des domaines dans lesquels il exerçait. À
la fois, cette fascination de créer une cité politique qui est la Cité Mondiale, et le fait de
vouloir mélanger les genres, de ne pas être dans une volonté de standardisation avec des
spécialistes, mais aussi une volonté de travailler avec le monde de l’industrie, parce que c’est
ce qu’il a réussi. C’est un réel handicap à cette époque là parce que vous avez une
spécialisation dans tous les domaines de la connaissance et finalement celui qui fait un peu de
tout, il le fait un peu moins bien. Dans certains milieux, ou après une lecture très
superficielle du travail mené par Otlet, on comprend que le personnage bénéficie d’un a
priori négatif car il a mélangé les genres ou les domaines. Par exemple, Otlet s’est attaqué à
différentes institutions pour leur manque d’originalité en terme de bibliographie. La
Bibliothèque Royale en a fait les frais. Ça peut laisser quelques traces inattendues dans
l’histoire. L’héritage d’Otlet en matière bibliographique n’est pas forcément mis en évidence
dans un lieu tel que la bibliothèque nationale. C'est, on le comprend, difficile d'imaginer une
institution qui explique certains engagements de manière aussi personnalisée ou
individualisée. On va plutôt parler d’un service et de son histoire dans une période plus
longue. On évite ainsi d’entrer dans des détails tels que ceux-là.
Effectivement, il y a à la fois le Monsieur dans son époque, la vision que les scientifiques vont
en garder aujourd’hui et des académiques. Et puis, il y a la fascination de tout un chacun.
Notre travail à nous, c’est de faire de tout. C’est à la fois de faire en sorte que les archives
soient disponibles pour le tout un chacun, mais aussi que le scientifique qui a envie d’étudier,
dans une perspective positive ou négative, puisse le faire.
ON EST PAS DANS L’OTLETANEUM ICI !

FS : Le travail d’Otlet met en relation l’organisation du savoir et de la communication.
Comment votre travail peut-il, dans un centre d’archives qui est aussi un lieu de rencontre et
un musée, être inspiré - ou pas - par cette mission qu'Otlet s'était donnée ?
SM : Il y a quand même un chose qui est essentielle, c’est qu’on est pas dans l’Otletaneum
ici, on n’est pas dans la fondation Otlet.

Nous sommes un centre d’archives spécialisé, qui a conservé toutes les archives liées à une
institution. Cette institution était animée par des hommes et des femmes. Et donc, ce qui les
animaient, c’était différentes choses, dont le désir de transmission. Et quand à Otlet, on a
identifié son envie de transmettre et il a imaginé tous les moyens. Il n’était pas ingénieur non
plus, il ne faut pas rire. Et donc, c’est un peu comme Jules Verne, il a rêvé le monde, il a
imaginé des choses différentes, des instruments. Il s’est mis à rêver à certaines choses, à des
applications. C’est un passionné, c’est un innovateur et je pense qu’il a passionné des gens
autour de lui. Mais, autour de lui, il y avait d’autres personnes, notamment Henri La
Fontaine, qui n’est pas moins intéressant. Il y avait aussi le Baron Descamps et d’autres
personnes qui gravitaient autour de cette institution. Il y avait aussi tout un contexte
particulier lié notamment à la sociologie, aux sciences sociales, notamment Solvay, et voilà.
Tout ceux qu’on retrouve et qui ont traversé une quarantaine d’années.
Aujourd’hui, nous sommes un centre d’archives avec des supports différents, avec cette
volonté encyclopédique qu'ils ont eue et qui a été multi-supports, et donc l'œuvre phare n'a
pas été uniquement Le Traité de Documentation. C'était intéressant de comprendre sa
genèse avec les visites que vous aviez faites, mais il y a d'autres fonds, notamment des fonds liés
au pacifisme, à l’anarchisme et au féminisme. Et aussi tout ce département iconographique
avec ces essais un peu particuliers qui ne sont pas super connus.
Donc on n’est pas dans l’Otletaneum et nous ne sommes pas dans le sanctuaire d’Otlet.
ADV : La question est plutôt : comment s’emparer de sa vision dans votre travail ?
SM : J’avais bien compris la question.
En rendant accessible ses archives, son patrimoine et en participant à la meilleure
compréhension à travers nos efforts de valorisation : des publications, visites guidées mais
aussi le programme d’activités qui permettent de mieux comprendre son travail. Ce travail
s’effectue notamment à travers le label du Patrimoine Européen mais aussi dans le cadre de
Mémoire du Monde[6].
RC : Ce n’est pas parce que Otlet a écrit que La Fontaine n’a pas travaillé sur le projet. Ce
n’était pas du tout les mêmes personnalités.
SM : On est sur des stéréotypes.
ADV : Otlet a tout de même énormément écrit ?
SM : Otlet a beaucoup synthétisé, diffusé et lu. Il a été un formidable catalyseur de son
époque.
RC : C’est plutôt perdre la pensée d’Otlet en allant dans un seul sens, parce que lui il voulait
justement brasser des savoirs, diffuser l’ensemble de la connaissance. Pour nous l’objectif


c’est vraiment de pouvoir tout exploiter, tous les sujets, tous les supports, toutes les
thématiques… Quand on dit qu’il a préfiguré internet, c’est juste deux schémas d’Otlet et on
tourne autour de deux schémas depuis 2012, même avant d’ailleurs, ces deux schémas A4.
Ils ne sont pas grands.
SM : Ce qui n’est pas juste non plus, c’est le caractère réducteur par lequel on passe quand
on réduit le Mundaneum à Otlet et qu’on ne réduit Otlet qu’à ça. Et d’un autre côté, ce que
je trouve intéressant aussi, c’est les autres personnalités qui ont décidé de refaire aussi le
monde par la fiche et là, notre idée était évidemment de mettre en évidence toutes ces
personnes et les compositions multiformes de cette institution qui avait beaucoup d’originalité
et pas de s’en tenir à une vision « La Fontaine c’est le prix Nobel de la paix, Otlet c’est
monsieur Internet, Léonie La Fontaine c’est Madame féminisme, Monsieur Hem Day[7] c’est
l’anarchiste … » On ne fait pas l’Histoire comme ça, en créant des catégories.
RC : Je me souviens quand je suis arrivée ici en 2002 : Paul Otlet c’était l’espèce de savant
fou qui avait voulu créer une cité mondiale et qui l’avait proposée à Hitler. Les gens avaient
oublié tout ce qu’il avait fait avant.
Vous avez beaucoup de bibliothèques qui aujourd’hui encore classent au nom de la CDU
mais ils ne savent pas d’où ça vient. Tout ce travail on l’a fait et ça remettait, quand même,
les choses à leur place et on l’a ouvert quand même au public. On a eu des ouvertures avec
des différents publics à partir de ce moment là.
SM : C’est aussi d’avoir une vision globale sur ce que les uns et les autres ont fait et aussi de
ce qu’a été l’institution, ce qui est d’ailleurs l’une des plus grosse difficulté qui existe. C’est de
s’appeler Mundaneum dans l’absolu.
On est le « Mundaneum Centre d’archives » depuis 1993. Mais le Mundaneum c’est une
institution qui naît après la Première Guerre mondiale, dont le nom est postérieur à l'IIB.
Dans ses gènes, elle est bibliographique et peut-être que ce sont ces différentes notions qu'il
faut essayer d’expliquer aux gens.
Mais c’est quand même formidable de dire que Paul Otlet a inventé internet, pourquoi pas.
C’est une formule et je pense que dans l’absolu la formule marque les gens. Maintenant, il
n’a pas inventé Google. J’ai bien dit Internet.
POUR LA CARICATURE, C'EST SYMPA. POUR LA RÉALITÉ MOINS.

FS : Qu’est ce que votre collaboration avec Google vous a-t-elle apportée ? Est-ce qu'ils vous
ont aidé à numériser des documents?

RC : C’est nous qui avons numérisé. C’est moi qui met les images en ligne sur Google.
Google n’a rien numérisé.
ADV : Mais donc vous vous transmettez des images et des métadonnées à Google mais le
public n’a pas accès à ces images … ?
RC : Ils ont accès, mais ils ne peuvent pas télécharger.
FS : Les images que vous avez mises sur Google Cultural Institute sont aujourd’hui dans le
domaine public et donc en tant que public, je ne peux pas voir que les images sont libres de
droit, parce qu’elles sont toutes sous la licence standard de Google.
RC : Ils ont mis « Collection de la Fédération Wallonie Bruxelles » à chaque fois. Puisque
ça fait partie des métadonnées qui sont transmises avec l’image.
ADV : Le problème, actuellement, comme il n’y a pas de catalogue en ligne, c’est qu’il n’y a
pas tant d’autres accès. À part quelques images sur numeriques.be, quand on tape « Otlet »
sur un moteur de recherche, on a l’impression que ce n’est que via le Google Cultural Institute
par lequel on a accès et en réalité c’est un accès limité.
SM : C’est donc une impression.
RC : Vous avez aussi des images sur Wikimedia commons. Il y a la même chose que sur
Google Cultural Institute. C’est moi qui les met des deux cotés, je sais ce que je mets. Et là
je suis encore en train d’en uploader dessus, donc allez y. Pour l’instant, c’est de nouveau des
schémas d’Otlet, en tout cas des planches qui sont mises en ligne.
Sur Wikimédia Commons, je ne sais pas importer les métadonnées automatiquement. Enfin
j’importe un fichier et puis je dois entrer les données moi-même. Je ne peux pas importer un
fichier Excel. Dans Google je fais ça, j’importe les images et ça se fait tout seul.
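For comparison, batch uploads to Wikimedia Commons are often scripted with the pywikibot library, reading the metadata from a spreadsheet export such as a CSV file. The sketch below is only a hedged example under that assumption (the interview notes that the import tools proposed to the Mundaneum did not fit their setup); the CSV columns and the wikitext template values are illustrative.

```python
# Hedged sketch of a batch upload to Wikimedia Commons driven by a CSV of
# metadata, using pywikibot (which handles authentication via its user-config).
# Column names and the {{Information}} values are illustrative only.
import csv
import pywikibot

site = pywikibot.Site("commons", "commons")

def upload_row(row: dict) -> None:
    """Upload one local file with a small {{Information}} description page."""
    text = (
        "== {{int:filedesc}} ==\n"
        "{{Information\n"
        f"|description={row['description']}\n"
        f"|date={row['date']}\n"
        f"|source={row['source']}\n"
        f"|author={row['author']}\n"
        "}}\n"
    )
    page = pywikibot.FilePage(site, f"File:{row['target_name']}")
    site.upload(page,
                source_filename=row["local_path"],
                comment="Batch upload from CSV (example)",
                text=text)

with open("metadata.csv", newline="", encoding="utf-8") as handle:
    for row in csv.DictReader(handle):
        upload_row(row)
```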
ADV : Et vous ne pouvez pas trouver une collaboration avec les gens de Wikimédia Commons ?
RC : En fait, ils proposent des systèmes d’importations mais qui ne fonctionnent pas ou alors
qui ne fonctionnent pas avec Windows. Et donc, moi je ne vais pas commencer à installer un
PC qui fonctionne avec Linux ou Ubuntu juste pour pouvoir uploader sur Wikimédia.
ADV : Mais eux peuvent le faire ?
RC : On a eu la collaboration sur Le traité de Documentation, puisque c’est eux qui ont
travaillé. Ils ont tout retranscrit.
Aussi, il faut dédommager les bénévoles. Ça je peux vous garantir. Ils sont bénévoles jusqu’à
un certain point. Mais si vous leur confiez du travail comme ça … Ils sont bénévoles parce


que quand ils retravaillent des fiches sur Wikipédia, parce que c’est leur truc, ils en ont envie,
c’est leur volonté.
Je ne mets pas plus sur Google Cultural Institute que sur Wikipédia. Je ne favorise pas
Google. Ce qu’il y a sur le Cultural Institute, c’est qu’on a la possibilité de réaliser des
expositions virtuelles et quand j’upload là, c’est parce qu’on a une exposition qui va être faite.
On essaye de faire des expositions virtuelles. C’est vrai que ça fonctionne bien pour nous en
matière de communication pour les archives. Ça, il ne faut pas s’en cacher. J’ai beaucoup de
demandes qui arrivent, des demandes d’images, par ce biais là. Ça nous permet de valoriser
des fonds et des thématiques qu’on ne pourrait pas faire dans l’espace.
On a fait une exposition sur Léonie La Fontaine, qui a permis de mettre en ligne une centaine
de documents liés au féminisme, ça n’avait jamais été fait avant. C’était très intéressant et ça
a eu un bon retour pour les autres expositions aussi. Moi, c’est plutôt comme ça que j’utilise
Google Cultural Institute. Je ne suis pas pro Google mais là, j’ai un outil qui me permet de
valoriser les archives.
ADV : Google serait-il la seule solution pour valoriser vos archives ?
SM : Notre solution c’est d’avoir un logiciel à nous. Pourquoi avoir cette envie d’alimenter
d’autres sites ? Parce qu’on ne l’a pas sur le nôtre. Pour rappel, on travaille pour la
Communauté Française qui est propriétaire des collections et avec laquelle on est
conventionné. Elle ne nous demande pas d’avoir un logiciel externe. Elle demande qu’on ait
notre propre produit aussi. Et c’est là dessus que l’on travaille depuis 2014, pour le
remplacement de Pallas, parce que ça fait des années qu’ils nous disent qu’ils ne vont plus
soutenir. C’est plutôt ça qui nous met dans une situation complètement incompréhensible.
Comment voulez-vous qu'on puisse faire transparaître ce que nous avons si on n'a pas un
outil qui permette aux chercheurs, quels qu'ils soient, scientifiques ou non, d'être autonomes
dans leurs recherches ? Et pour nous, le travail que nous avons fait en termes d'inventaire et
de numérisation, qu'il soit exploitable de manière libre ?
Moi, franchement, je me demande, si cette question et cette vision que vous avez, elle ne se
poserait pas si finalement nous étions déjà sur autre chose que Pallas. On est dans un
inconfort de travail de base.
Je pense aussi que l’information à donner de notre part c’est de dire « il y a tout ceci qui
existe, venez le voir ».
On arrive à sensibiliser aussi sur les collections qu’il y a au centre d’archives et c’est bien,
c’est tout à fait intéressant. Maintenant ce serait bien aussi de franchir une autre étape et
d’éduquer sur l'ouverture au patrimoine. C’est ça aussi notre mission.

Donc Google a sa propre politique. Nous avons mis à disposition quelques expositions et
ceci en est l’intérêt. Mais on a quand même tellement de partenaires différents avec lesquels
on a travaillé. On ne privilégie pas un seul partenaire. Aujourd’hui, certaines firmes viennent
vers nous parce qu’elles ont entendu parler justement plus de Google que du Mundaneum et
en même temps du Mundaneum par l’intermédiaire de Google.
Ce sont des éléments qui nous permettent d’ouvrir peut-être le champ du dialogue avec
d’autres partenaires mais qui ne permettent pas d’aller directement en profondeur dans les
archives, enfin, dans le patrimoine réel que l’on a.
Je veux dire, on aura beau dire qu'on fait autre chose, on ne verra que celui-là parce que
Google est un mastodonte et parce que ça parle à tout le monde. On est dans une ère de
communication particulière.
RC : Maintenant, la collaboration Google et l'image que vous en avez, eh bien nous, on en
pâtit énormément au niveau des archives. Et encore, parce que souvent les gens nous disent
« mais vous avez un gros mécène »
SM : Ils nous réduisent à ça. Pour la caricature c’est sympa. Pour la réalité moins.
FS : Quand on parle aux gens de l’Université de Gand, c’est clair que leur collaboration avec
Google Books a eu une autre fonction. Ce ne sont que des livres, des objets qui sont scannés
de manière assez brutes. Il n’y a pas de métadonnées complexes, c’est plutôt une question de
volume.
SM : La politique de numérisation de l’Université de
Gand, je pense, est plus en lien avec ce que Google
imagine. C’est-à-dire quelle est la plus value que ça leur
apporte de pouvoir travailler à la fois une bibliothèque
universitaire telle que la bibliothèque de l’Université de
Gand, et le fait de l’associer avec le Mundaneum ?
FS : C’est aussi d'autres besoins, un autre type d’accès ?
Dans une bibliothèque les livres sont là pour être lus, j’ai
l’impression que ce n’est pas la même vision pour un
centre d’archives.
SM : C’est bien plus complexe dans d’autres endroits.

From Voor elk boek is een gebruiker:
SVP: Maar ... je kan niet bij Google
gaan aankloppen, Google kiest jou.
Wij hebben wel hun aandacht
gevraagd voor het Mundaneum met
de link tussen Vander Haeghen en
Otlet. Als Google België iets
organiseert, proberen ze ons altijd te
betrekken, omdat wij nu eenmaal een
universiteit zijn. U heeft het
Mundaneum gezien, het is een zeer
mooi archief, maar dat is het ook.
Voor ons zou dat enkel een stuk van
een collectie zijn. Ze worden ook op
een totaal andere manier gesteund
door Google dan wij.

Notre intention en terme de numérisation n’est pas celle
là, et nous ne voyons pas notre action, nous, uniquement
par ce biais là. À Gand, ils ont numérisé des livres. C’est leur choix soutenu par la Région
flamande. De notre côté, nous poursuivons une même volonté d’accès pour le public et les


chercheurs mais avec un matériel, un patrimoine, bien différent de livres publiés uniquement !
Le travail avec Google a permis de collaborer plusieurs fois avec l’Université mais nous
l’avions déjà fait avant de se retrouver avec Google sur certaines activités et l’accueil de
conférenciers. Donc, il y a un partenariat avec l’Université gantoise qui est intéressée par
l’histoire d’Otlet, l’histoire des idées mais aussi de l’internationalisme, de l’architecture de la
schématique. C’est d’ailleurs très enrichissant comme réflexion.
TOUT NUMÉRISER

FS : J’ai entendu quelqu’un se demander « pourquoi ne pas numériser toutes les fiches
bibliographiques qui sont dans les tiroirs » ?
RC : Ça ne sert à rien. Toutes les fiches ça n’aurait pas de sens. Maintenant, ce serait
intéressant d’en étudier quelques-unes.
Il y avait un réseau aussi autour du répertoire. C’est à dire que si on a autant de fiches, ce
n'est pas seulement parce qu’on a des fiches qui ont été rédigées à Bruxelles, on a des fiches
qui viennent du monde entier. Dans chaque pays il y avait des institutions responsables de
réaliser des bibliographies et de les renvoyer à Bruxelles.
Ça serait intéressant d’avoir un échantillon de toutes ces institutions ou de toutes ces fiches
qui existent. Ça permettrait aussi de retrouver la trace de certaines institutions qui n’existent
plus aujourd’hui. On a quand même eu deux guerres, il y a eu des révolutions etcetera. Ils
ont quand même travaillé avec des institutions russes qui n’existent plus aujourd’hui. Par ce
biais là, on pourrait retrouver leur trace. Même chose pour des ouvrages. Il y a des ouvrages
qui n’existent plus et pour lesquels on pourrait retrouver la trace. Il faut savoir qu’après la
deuxième guerre mondiale, en 46-47, le président du Mundaneum est Léon Losseau. Il est
avocat, il habite Mons, sa maison d’ailleurs est au 37 rue de Nimy, pas très loin. Il collabore
avec le Mundaneum depuis ses débuts et donc vu que les deux fondateurs sont décédés
pendant la guerre, à ce moment là il fait venir l’UNESCO à Bruxelles. Parce qu’on est dans
une phase de reconstruction des bibliothèques, beaucoup de livres ont été détruits et on
essaye de retrouver leur traces. Il leur dit « venez à Bruxelles, nous on a le répertoire de tous
ces bouquins, venez l’utiliser, nous on a le répertoire pour reconstituer toutes les
bibliothèques ».
Donc, tout numériser, non. Mais numériser certaines choses pour montrer le mécanisme de
ce répertoire, sa constitution, les différents répertoires qui existaient dans ce répertoire et de
pouvoir retrouver la trace de certains éléments, oui.
Si on numérise tout, cela permettrait d’avoir un état des lieux des sources d’informations qui
existaient à une époque pour un sujet.
SM : Le cheminement de la pensée.

Il y a des pistes très intéressantes qui vont nous permettre d’atteindre des aspects
protéiformes de l’institution, mais c’est vaste.
LA MÉMOIRE VIVE DE L’INSTITUTION

FS : Nous étions très touchées par les fiches annotées de la CDU que vous nous avez
montrées la dernière fois que nous sommes venues.
RC : Le travail sur le système lui-même.
SM : C’est fantastique effectivement, avec l’écriture d’Otlet.
SM : Autant on peut dire qu'Otlet est un maître du marketing, autant il utilisait plusieurs
termes pour décrire une même réalité. C’est pour ça que ne s’attacher qu’à sa vision à lui
c’est difficile. Comme classer ses documents, c’est aussi difficile.
ADV : Otlet n’a-t-il pas laissé suffisamment de documentation ? Une documentation qui
explicite ses systèmes de classement ?
RC : Quand on a ouvert les boîtes d'Otlet en 2002, c’était des caisses à bananes non
classées, rien du tout. En fonction de ce qu’on connaissait de l’histoire du Mundaneum à
l’époque on a pu déterminer plus ou moins des frontières et donc on avait l'Institut
international de bibliographie, la CDU, la Cité Mondiale aussi, le Musée International.
SM : Du pacifisme ...
RC : On a appelé ça « Mundapaix » parce qu’on ne savait pas trop comment le mettre dans
l’histoire du Mundaneum, c’était un peu bizarre. Le reste, on l'avait mis de côté parce qu’on
n'était pas en mesure, à ce moment là, de les classer dans ce qu’on connaissait. Puis, au fur
et à mesure qu’on s’est mis à lire les archives, on s’est mis à comprendre des choses, on a
découvert des institutions qui avaient été créées en plus et ça nous a permis d’aller
rechercher ces choses qu’on avait mises de coté.
Il y avait tellement d’institutions qui ont été créées, qui ont pu changer de noms, on ne sait
pas si elles ont existé ou pas. Il faisait une note, il faisait une publication où il annonçait :
« l’office centrale de machin chose » et puis ce n'est même pas sûr qu’il ait existé quelque
part.


Parfois, il reprend la même note mais il change certaines
choses et ainsi de suite … rien que sa numérotation c’est
pas toujours facile. Vous avez l’indice CDU, mais
ensuite, vous avez tout le système « M » c’est la référence
aux manuels du RBU. Donc il faut seulement aller
comprendre comment le manuel du RBU est organisé.
C’est à dire trouver des archives qui correspondent pour
pouvoir comprendre cette classification dans le « M ».
RC : On n’a pas trouvé un moment donné, et on aurait
bien voulu trouver, un dossier avec l’explication de son
classement. Sauf qu’il ne nous l’a pas laissé.
SM : Peut-être qu’il est possible que ça ait existé, et je
me demande comment cette information a été expliquée
aux suivants. Je me demande même si Georges Lorphèvre
savait, parce qu'il n’a pas pu l’expliquer à Boyd
Rayward. En tout cas, les explications n’ont pas été
transmises.

From De Indexalist:
"Bij elke verwijzing stond weer een
andere verwijzing, de één nog
interessanter dan de ander. Elk
vormde de top van een piramide van
weer verdere literatuurstudie, zwanger
met de dreiging om af te dwalen. Elk
was een strakgespannen koord dat
indien niet in acht genomen de auteur
in de val van een fout zou lokken, een
vondst al uitgevonden en
opgeschreven."
From The Indexalist:
“At every reference stood another
reference, each more interesting than
the last. Each the apex of a pyramid
of further reading, pregnant with the
threat of digression, each a thin high
wire which, if not observed might lead
the author into the fall of error, a
finding already found against and
written up.”

L’équipe du Mundaneum a développé une expérience
de plusieurs années et une compréhension sur les archives et leur organisation. Nous avons
par exemple découvert l’existence de fichiers particuliers tels que les fichiers « K ». Ils sont
liés à l’organisation administrative interne. Il a fallu montrer les éléments et archives sur
lesquels nous nous sommes basés pour bien prouver la démarche qui était la nôtre. Certains
documents expliquaient clairement cela. Mais si vous ne les avez jamais vus, c'est difficile de
croire un nouvel élément inconnu !
RC : On n’a pas beaucoup d’informations sur l’origine des collections, c’est-à-dire sur
l’origine des pièces qui sont dans les collections. Par hasard, je vais trouver un tiroir où il est
mis « dons » et à l’intérieur, je ne vais trouver que des fiches écrites à la main comme « dons
de madame une telle de deux drapeaux pour le Musée International » et ainsi de suite.
Il ne nous a pas laissé un manuel à la fin de ses archives et c’est au fur et à mesure qu’on lit
les archives qu’on arrive à faire des liens et à comprendre certains éléments. Aujourd’hui,
faire une base de données idéale, ce n’est pas encore possible, parce qu’il y a encore
beaucoup de choses que nous-mêmes on ne comprend pas. Qu’on doit encore découvrir.
ADV : Serait-il imaginable de produire une documentation issue de votre cheminement dans
la compréhension progressive de cette classification ? Par exemple, des textes enrichis donnant
une perception plus fine, une trace de la recherche. Est-ce que c’est quelque chose qui pourrait
exister ?
RC : Oui, ce serait intéressant.

Par exemple si on prend le répertoire bibliographique. Déjà, il n’y a pas que des références
bibliographiques dedans. Vous avez deux entrées : entrée par matière, entrée par auteur,
donc vous avez le répertoire A et le répertoire B. Si vous regardez les étiquettes, parfois,
vous allez trouver autre chose. Parfois, on a des étiquettes avec « ON ». Vous savez ce que
c’est ? C’est « catalogue collectif des bibliothèque de Belgique ». C’est un travail qu’ils ont
fait à un moment donné. Vous avez les « LDC » les « Bibliothèques collectifs de sociétés
savantes ». Chaque société ayant un numéro, vous avez tout qui est là. Le « K » c’est tout ce
qui est administratif donc à chaque courrier envoyé ou reçu, ils rédigeaient une fiche. On a
des fiches du personnel, on sait au jour le jour qui travaillait et qui a faisait quoi… Et ça, il
ne l’a pas laissé dans les archives.
SM : C’est presque la mémoire vive de l’institution.
On a eu vraiment cette envie de vérifier dans le répertoire cette façon de travailler, le fait
qu’il y ait des informations différentes. Effectivement, c’était un peu avant 2008, qu’on l'a su
et cette information s’est affinée avec des vérifications. Il y a eu des travaux qui ont pu être faits avec l’identification de séries particulières de dossiers numérotés que Raphaèle a identifiées. Il y avait des correspondances et toute une structuration qu’on a identifiée aussi. Ce
sont des sections précises qui ont permis d’améliorer, à la fois la CDU, au départ de faire la
CDU, de faire le répertoire et puis de créer d’autres sections, comme la section féministe,
comme la section chasse et pêche comme la section iconographique. Et donc, par rapport à
ça, je pense qu’il y a vraiment tout un travail qui doit être mis en relation à partir d’une
observation claire, à partir d’une réflexion claire de ce qu’il y a dans le répertoire et dans les
archives. Et ça, c’est un travail qui se fait étape par étape. J’espère qu’on pourra quand
même bien avancer là dessus et donner des indications qui permettront d’aller un peu plus
loin, je ne suis pas sûre qu’on verra le bout.
C’est au moins de transmettre une information, de faire en sorte qu’elle soit utilisable et que
certains documents et ces inventaires soient disponibles, ceux qui existent aujourd’hui. Et que
ça ne se perde pas dans le temps.
FS : Un jour, pensez-vous pouvoir dire « voilà, maintenant c’est fini, on a compris » ?
SM : Je ne suis pas sûre que ce soit si impossible que ça.
Ça dépend de notre volonté et dialogue autour de ces documents. Un dialogue entre les
chercheurs de tout type et l’équipe du Mundaneum enrichit la compréhension. Plus on est
nombreux autour de certains points, plus la compréhension s’élargit. Ça implique bien
entendu une implication de partenaires externes également.
Aujourd’hui on est passé à une politique de numérisation par un matériel, par une
spécialisation du personnel. Et je pense que cette spécialisation nous a permis, depuis des
années, d’aller un peu plus profondément dans les archives et donc de mieux les comprendre.

P.44

P.45

Il y a un historique que l’on comprend véritablement bien aussi, il ne demande qu’à se
déployer. Il y a à comprendre comment on va pouvoir valoriser cela autour de journées,
autour de publications, autour d’outils qui sont à notre disposition. Et donc, autour de
catalogues en ligne, notamment, et de notre propre catalogue en ligne.
C’EST ÇA QU’IL FAUT IMAGINER

FS : Les méthodes et les standards de documentation changent, l’histoire institutionnelle et les
temps changent, les chercheurs passent… vous avez vécu avec tout ça depuis longtemps. Je
me demande comment le faire transparaître, le faire ressentir?
SM : C’est vrai qu’on aimerait bien pouvoir axer aussi la communication de l’institution sur
ces différents aspects. C’est bien ça notre rêve en fait, ou notre aspiration. Pour l’instant, on
est plutôt en train de se demander comment on va mieux communiquer, sur ce que nous
faisons nous ?
RC : Est-ce que ce serait uniquement en mettant en ligne des documents ? Ou imaginer une
application qui permettrait de les mettre en œuvre? Par exemple, si je prends la
correspondance, moi j’ai lu à peu près 3000 courriers. En les lisant, on se rend vraiment
compte du réseau. C’est-à-dire qu’on se rend compte qu’il a de la correspondance à peu près
partout dans le monde. Que ce soit avec des particuliers, avec des bibliothèques, avec des
universités, avec des entreprises et donc déjà rien qu’avec cet échantillon-là, ça donne une
masse d’informations. Maintenant, si on commence à décrire dans une base de données,
lettre par lettre, je ne suis pas sûre que cela apporte quelque chose. Par contre, si on imagine
une application qui permette de faire ressortir sur une carte à chaque fois le nom des
correspondants, là, ça donne déjà une idée et ça peut vraiment mettre en œuvre toute cette
correspondance. Mais prise seule juste comme ça, est-ce que c’est vraiment intéressant ?
Dans une base de données dite « classique », c’est ça aussi le problème avec nos archives, le
Mundaneum n'étant pas un centre d’archives comme les autres de par ses collections, c’est
parfois difficile de nous adapter à des standards existants.
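The kind of application RC describes, one that surfaces the network of correspondents rather than describing every letter, can be prototyped with very little code. The sketch below is only an illustration and not a description of any existing Mundaneum system: it assumes a hypothetical CSV export named correspondence.csv with columns correspondent, city and country, and aggregates the letters per correspondent and place so that the resulting counts could then be put on a map.

# Minimal sketch: count letters per correspondent and place, as a first step
# towards mapping the correspondence network. The input file and its columns
# (correspondent, city, country) are hypothetical, for illustration only.
import csv
from collections import Counter

def letters_per_place(path="correspondence.csv"):
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # one row per letter; group by correspondent and location
            counts[(row["correspondent"], row["city"], row["country"])] += 1
    return counts

if __name__ == "__main__":
    for (who, city, country), n in letters_per_place().most_common(20):
        print(f"{n:4d}  {who} - {city}, {country}")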
ADV : Il n’y aurait pas qu’un seul catalogue ou pas une seule manière de montrer les
données. C’est bien ça ?
RC : Si vous allez sur Pallas[1], vous avez la hiérarchie du fonds Otlet. Est-ce que ça parle à quelqu’un, à part quelqu’un qui veut faire une recherche très spécifique ? Mais sinon, ça ne lui permet pas de vraiment visualiser le travail qui a été fait, ni même l’ampleur du travail. Nous, on ne peut pas se conformer à une base de données comme ça. Il faut que ça existe, mais ça ne fait pas transparaître le travail d'Otlet et de La Fontaine. Une vision comme ça, ce n'est
pas Mundaneum.

SM : Il n’y a finalement pas de base de données qui arrive à la cheville de ce qu’ils ont imaginé en termes de papier. C’est ça qu’il faut imaginer.
FS : Pouvez-vous nous parler de cette vision d’un catalogue possible ? Si vous aviez tout
l’argent et tout le temps du monde ?
SM : On ne dort plus alors, c’est ça ?
Il y a déjà une bonne structure qui est là, et l’idée c’est vraiment de pouvoir lier les
documents, les descriptions. On peut aller plus loin dans les inventaires et numériser les
documents qui sont peut-être les plus intéressants et peut-être les plus uniques. Maintenant,
le rêve serait de numériser tout, mais est-ce que ce serait raisonnable de tout numériser ?
FS : Si tous les documents étaient disponibles en ligne ?
RC : Je pense que ça serait difficile de pouvoir transposer la pensée et le travail d'Otlet et
La Fontaine dans une base de données. C’est-à-dire que dans une base de données, c’est souvent une conception très carrée : vous décrivez le fonds, la série, le dossier, la pièce. Ici
tout est lié. Par exemple, la collection d’affiches, elle dépend de l’Institut International de
Photographie qui était une section du Mundaneum, c’était la section qui conserve l’image.
Ça veut dire que je dois d’abord comprendre tous les développements qui ont eu lieu avec le
concept de documentation pour ensuite lier tout le reste. Et c’est comme ça pour chaque
collection parce que ce ne sont pas des collections qui sont montées par hasard, elles
dépendaient à chaque fois d’une section spécialisée. Et donc, transposer ça dans une base de
données, je ne sais pas comment on pourrait faire.
Je pense aussi qu’aujourd’hui on n’est pas encore assez loin dans les inventaires et dans toute
la compréhension parce qu’en fait à chaque fois qu’on se plonge dans les archives, on
comprend un peu mieux, on voit un peu plus d’éléments, un peu plus de complexité, pour
vraiment pouvoir lier tout ça.
SM : Effectivement nous n’avons pas encore tout compris, il y a encore tous les petits
offices : office chasse, office pêche et renseignements…
RC : À la fin de sa vie, il va aller vers tout ce qui est standardisation, normalisation. Il va être
membre d’associations qui travaillent sur tout ce qui est norme et ainsi de suite. Il y a cet
aspect là qui est intéressant parce que c’est quand même une grande évolution par rapport au
début.
Avec le Musée International, c’est la muséographie et la muséologie qui sont vraiment une
grosse innovation à l’époque. Il y a déjà des personnes qui s’y sont intéressé mais peut-être
pas suffisamment.

P.46

P.47

Je rêve de pouvoir reconstituer virtuellement les salles d’expositions du Musée International,
parce que ça devait être incroyable de voyager là dedans. On a des plans, des photos. Même
si on n’a plus d’objets, on a suffisamment d’informations pour pouvoir le faire. Et il serait
intéressant de pouvoir étudier ce genre de salle même pour aujourd’hui, pour la
muséographie d’aujourd’hui, de reprendre exemple sur ce qu’il a fait.
FS : Si on s’imagine le Mundaneum virtuel, vraiment, si on essaye de le reconstruire à partir
des documents, c’est excitant !
SM : On en parle depuis 2010, de ça.
FS : C’est pas du tout comme le scanner high-tech de Google Art qui passe devant la Mona Lisa…
SM : Non. C’est un autre travail.
FS : Ce n’est pas ça le musée virtuel.
RC : C’est un autre boulot.
Last
Revision:
2·08·2016

1. Logiciel fourni par la Communauté française aux centres d’archives privées. « Pallas permet de décrire, de gérer et de consulter
des documents de différents types (archives, manuscrits, photographies, images, documents de bibliothèques) en tenant compte
des conditions de description spécifiques à chaque type de document. » http://www.brudisc.be/fr/content/logiciel-pallas
2. « Images et histoires des patrimoines numérisés » [1]
3. « Notre mission : On transforme le monde par la culture! Nous voulons construire sur le riche héritage culturel européen et
donner aux gens la possibilité de le réutiliser facilement, pour leur travail, pour leur apprentissage personnel ou tout simplement
pour s’amuser. » http://www.europeana.eu
4. « The Dublin Core Metadata Initiative (DCMI) supports shared innovation in metadata design and best practices across a
broad range of purposes and business models. » http://dublincore.org/about-us/
5. La norme générale et internationale de description archivistique, ISAD(G) http://www.ica.org/sites/default/files/

CBPS_2000_Guidelines_ISAD%28G%29_Second-edition_FR.pdf

6. « L'UNESCO a mis en place le Programme Mémoire du monde en 1992. Cette mise en oeuvre est d'abord née de la prise
de conscience de l'état de préservation alarmant du patrimoine documentaire et de la précarité de son accès dans différentes
régions du monde. » http://www.unesco.org/new/fr/communication-and-information/memory-of-the-

world/about-the-programme

7. Marcel Dieu dit Hem Day

Amateur
Librarian
-A
Course
in
Critical
Pedagogy
Tomislav Medak & Marcell Mars (Public Library project)

A proposal for a curriculum in amateur librarianship, developed through the
activities and exigencies of the Public Library project. Drawing from a historic
genealogy of public library as the institution of access to knowledge, the
proletarian tradition of really useful knowledge and the amateur agency driven
by technological development, the curriculum covers a range of segments from
immediately applicable workflows for scanning, sharing and using e-books,
through politics and tactics around custodianship of online libraries, to applied
media theory implicit in the practices of amateur librarianship. The proposal is
made with further development, complexification and testing in mind during the
future activities of the Public Library and affiliated organizations.
PUBLIC LIBRARY, A POLITICAL GENEALOGY

Public libraries have historically established themselves as an institutional space of exemption from the
commodification and privatization of knowledge. A space where works of literature and
science are housed and made accessible for the education of every member of society
regardless of their social or economic status. If, as a liberal narrative has it, education is a
prerequisite for full participation in a body politic, it is in this narrow institutional space that
citizenship finds an important material base for its universal realization.

P.48

P.49

The library as an institution of public access and popular literacy, however, did not develop
before a series of transformations and social upheavals unfolded in the course of 18th and
19th century. These developments brought about a flood of books and political demands
pushing the library to become embedded in an egalitarian and democratizing political
horizon. The historic backdrop for these developments was the rapid ascendancy of the book
as a mass commodity and the growing importance of the reading culture in the aftermath of
the invention of the movable type print. Having emerged almost in parallel with capitalism, by
the early 18th century the trade in books was rapidly expanding. While in the 15th century
the libraries around the monasteries, courts and universities of Western Europe contained no
more than 5 million manuscripts, the output of printing presses in the 18th century alone
exploded to a formidable 700 million volumes.[1] And while this provided a vector for the
emergence of a bourgeois reading public and an unprecedented expansion of modern
science, the culture of reading and Enlightenment remained largely a privilege of the few.
Two social upheavals would start to change that. On 2 November 1789 the French
revolutionary National Assembly passed a decision to seize all library holdings from the
Church and aristocracy. Millions of volumes were transferred to the Bibliothèque Nationale
and local libraries across France. At the same time capitalism was on the rise, particularly in
England. It massively displaced the impoverished rural population into growing urban
centres, propelled the development of industrial production and, by the mid-19th century,
introduced the steam-powered rotary press into the commercial production of books. As
books became more easily mass-produced, the
commercial subscription libraries catering to the better-off
parts of society blossomed. This brought the class aspect
of the nascent demand for public access to books to the
fore.
After the failed attempt to introduce universal suffrage
and end the system of political representation based on
property entitlements through the Reform Act of 1832,
the English Chartist movement started to open reading
rooms and cooperative lending libraries that would
quickly become a popular hotbed of social exchange
between the lower classes. In the aftermath of the
revolutionary upheavals of 1848, the fearful ruling
classes finally consented to the demand for tax-financed
public libraries, hoping that the access to literature and
edification would after all help educate skilled workers
that were increasingly in demand and ultimately
hegemonize the working class for the benefits of
capitalism's culture of self-interest and competition.[2]

management hierarchies, and national
security issues. Various sets of these
conditions that are at work in a
particular library, also redefine the
notion of publishing and of the
publication, and in turn the notion of
public.

From Bibliothécaire amateur - un cours de pédagogie critique:
Puisqu'il était de plus en plus facile de
produire des livres en masse, les
bibliothèques privées payantes, au
service des catégories privilégiées de
la société, ont commencé à se
répandre. Ce phénomène a mis en
relief la question de la classe dans la
demande naissante pour un accès
public aux livres.

REALLY USEFUL KNOWLEDGE
[3]

It's no surprise that the Chartists, reeling from a political defeat, had started to open reading
rooms and cooperative lending libraries. The education provided to the proletariat and the
poor by the ruling classes of that time consisted, indeed, either of a pious moral edification
serving political pacification or of an inculcation of skills and knowledge useful to the factory
owner. Even the seemingly noble efforts of the Society for the Diffusion of Useful Knowledge, a Whig organization aimed at bringing high-brow learning to the middle and
working classes in the form of simplified and inexpensive publications, were aimed at dulling
the edge of radicalism of popular movements.[4]
These efforts to pacify the downtrodden masses pushed them to seek ways of self-organized
education that would provide them with literacy and really useful knowledge – not applied,
but critical knowledge that would allow them to see through their own political and economic
subjection, develop radical politics and innovate shadow social institutions of their own. The
radical education, reliant on meagre resources and time of the working class, developed in the
informal setting of household, neighbourhood and workplace, but also through radical press
and communal reading and discussion groups.[5]
The demand for really useful knowledge encompassed a critique of “all forms of ‘provided’
education” and of the liberal conception “that ‘national education’ was a necessary condition
for the granting of universal suffrage.” Development of radical “curricula and pedagogies”
formed a part of the arsenal of “political strategy as a means of changing the world.”[6]
CRITICAL PEDAGOGY

This is the context of the emergence of the public library. A historical compromise between a
push for radical pedagogy and a response to dull its edge. And yet with the age of
digitization, where one would think that the opportunities for access to knowledge have
expanded immensely, public libraries find themselves increasingly limited in their ability to
acquire and lend both digital and paper editions. It is a sign of our radically unequal times that political emancipation finds itself on the defensive, fighting again for this material base of pedagogy against the rising forces of privatization. Not only has mass education become accessible only under the condition of high fees, student debt and adjunct peonage, but the useful knowledge that the labour market and the reproduction of neoliberal capitalism demand has become the one and only rationale for education.

P.50

P.51

No wonder that over the last 6-7 years we have seen self-education, shadow libraries and amateur librarians emerge again to counteract the contraction of spaces of exemption shrunk by austerity and commodification.
The project Public Library was initiated with this counteraction in mind: to help everyone
learn to use simple tools to be able to act as an Amateur Librarian – to digitize, to collect, to
share, to preserve books and articles that were unaffordable, unavailable, undesirable in the
troubled corners of the Earth we hail from.
Amateur Librarian played an important role in the narrative of Public Library. And it seems
it was successful. People easily join the project by 'becoming' a librarian using Calibre[7] and
[let’s share books].[8] Other aspects of the Public Library narrative add a political articulation
to that simple yet disobedient act. Public Library detects an institutional crisis in education,
an economic deadlock of austerity and a domination of commodity logic in the form of
copyright. It conjures up the amateur librarians’ practice of sharing books/catalogues as a
relevant challenge against the convergence of that crisis, deadlock and copyright regime.
To understand the political and technological assumptions and further develop the strategies
that lie behind the counteractions of amateur librarians, we propose a curriculum that is
indebted to a tradition of critical pedagogy. Critical pedagogy is a productive and theoretical
practice rejecting an understanding of the educational process that reduces it to a technique of imparting knowledge and a neutral mode of knowledge acquisition. Rather, it sees pedagogy as a broader “struggle over knowledge, desire, values, social relations, and, most
important, modes of political agency”, “drawing attention to questions regarding who has
control over the conditions for the production of knowledge.”[9]

No industry in the present demonstrates the asymmetries of control over the conditions of production of knowledge more than academic publishing. The denial of access to outrageously expensive academic publications for many universities, particularly in the Global South, stands in stark contrast to the super-profits that a small number of commercial publishers draw from the free labour of scientists who write, review and edit contributions, and to the extortive prices their institutional libraries have to pay for subscriptions. It is thus here that amateur librarianship attains its poignancy for a critical pedagogy, inviting us to more closely formulate and unfold its practices in a shared process of discovery.
A CURRICULUM

Public library is:
• free access to books for every member of society,
• library catalogue,
• librarian.

The curriculum in amateur librarianship develops aspects
and implications of this definition. Parts of this curriculum
have evolved over a number of workshops and talks
previously held within the Public Library project, parts of
it are yet to evolve from a process of future research,
exchange and knowledge production in the education
process. While schematic, scaling from the immediately practical, through the strategic and tactical, to the reflexive registers of knowledge, it points to actual – here unnamed – people and practices we imagine we could be learning from.
The first iteration of this curriculum could be either a
summer academy rostered with our all-star team of
librarians, designers, researchers and teachers, or a small
workshop with a small group of students delving deeper
into one particular aspect of the curriculum. In short it is
an open curriculum: both open to educational process
and contributions by others. We welcome comments,
derivations and additions.

From Bibliothécaire amateur - un cours de pédagogie critique:
Actuellement, aucune industrie ne
montre plus d'asymétries au niveau du
contrôle des conditions de production
de la connaissance que celle de la
publication académique. Refuser
l'accès à des publications
académiques excessivement chères
pour beaucoup d'universités, en
particulier dans l'hémisphère sud,
contraste ostensiblement avec les
profits énormes qu'un petit nombre
d'éditeurs commerciaux tirent du
travail bénévole de scientifiques qui
écrivent, révisent et éditent des
contributions et avec les prix
exorbitants des souscriptions que les
bibliothèques institutionnelles doivent
payer.
From Voor elk boek is een
gebruiker:
FS: Hoe gaan jullie om met boeken
en publicaties die al vanaf het begin
digitaal zijn? DM: We kopen e-books
en e-tijdschriften en maken die
beschikbaar voor onderzoekers. Maar
dat zijn hele andere omgevingen,
omdat die content niet fysiek binnen
onze muren komt. We kopen toegang
tot servers van uitgevers of de
aggregator. Die content komt nooit bij
ons, die blijft op hun machines staan.
We kunnen daar dus eigenlijk niet
zoveel mee doen, behalve verwijzen
en zorgen dat het evengoed vindbaar
is als de print.

P.52

P.53

MODULE 1: WORKFLOWS
• from book to e-book
◦ digitizing a book on a
book scanner
◦ removing DRM and
converting e-book
formats
• from clutter to catalogue (see the sketch after this module)
◦ managing an e-book
library with Calibre
◦ finding e-books and
articles on online
libraries
• from reference to bibliography
◦ annotating in an ebook reader device or
application
◦ creating a scholarly
bibliography in Zotero
• from block device to network device
◦ sharing your e-book
library on a local
network to a reading
device
◦ sharing your e-book
library on the
internet with [let’s
share books]
• from private to public IP space
◦ using [let’s share
books] &
library.memoryoftheworld.org
◦ using logan & jessica
◦ using Science Hub
◦ using Tor
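As a minimal illustration of the "from clutter to catalogue" step above, the following sketch walks a folder of e-books and writes a small catalogue recording file name, size and checksum. It deliberately sticks to the Python standard library rather than claiming any particular Calibre command; the folder name ebooks/ and the output file catalogue.csv are assumptions made for the example, and for the "local network" step the same folder could simply be exposed with Python's built-in web server (python -m http.server), although a tool like [let’s share books] does far more.

# A minimal catalogue builder: scan a folder of e-books and record
# file name, size and a SHA-1 checksum in a CSV file.
# "ebooks/" and "catalogue.csv" are illustrative names, not part of any tool.
import csv
import hashlib
from pathlib import Path

EBOOK_SUFFIXES = {".epub", ".pdf", ".mobi", ".djvu"}

def sha1(path, chunk=1 << 20):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

def build_catalogue(folder="ebooks", out="catalogue.csv"):
    rows = []
    for p in sorted(Path(folder).rglob("*")):
        if p.suffix.lower() in EBOOK_SUFFIXES:
            rows.append({"file": str(p), "bytes": p.stat().st_size, "sha1": sha1(p)})
    with open(out, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["file", "bytes", "sha1"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)

if __name__ == "__main__":
    print(build_catalogue(), "e-books catalogued")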

MODULE 2: POLITICS/TACTICS
• from developmental subordination to subaltern disobedience
◦ uneven development &
political strategies
◦ strategies of the
developed v strategies
of the
underdeveloped : open
access v piracy
• from property to commons
◦ from property to
commons
◦ copyright, scientific
publishing, open
access
◦ shadow libraries,
piracy,
custodians.online
• from collection to collective action
◦ critical pedagogy &
education
◦ archive, activation &
collective action

MODULE 3: ABSTRACTIONS IN ACTION
• from linear to computational
◦ library &
epistemology:
catalogue, search,
discovery, reference
◦ print book v e-book:
page, margin, spine
• from central to distributed
◦ deep librarianship &
amateur librarians

P.54

P.55

◦ network infrastructure
(s)/topologies (ruling
class studies)
• from factual to fantastic
◦ universe as library as
universe

READING LIST
• Mars, Marcell; Vladimir, Klemo. Download & How to:
Calibre & [let’s share books]. Memory of the World (2014)
https://www.memoryoftheworld.org/blog/2014/10/28/
calibre-lets-share-books/
• Buringh, Eltjo; Van Zanden, Jan Luiten. Charting the “Rise of
the West”: Manuscripts and Printed Books in Europe, A
Long-Term Perspective from the Sixth through Eighteenth
Centuries. The Journal of Economic History (2009) http://
journals.cambridge.org/article_S0022050709000837
• Mattern, Shannon. Library as Infrastructure. Places Journal
(2014) https://placesjournal.org/article/library-as-infrastructure/
• Antonić, Voja. Our beloved bookscanner. Memory of the
World (2012) https://www.memoryoftheworld.org/
blog/2012/10/28/our-beloved-bookscanner-2/
• Medak, Tomislav; Sekulić, Dubravka; Mertens, An. How to:
Bookscanning. Memory of the World (2014) https://
www.memoryoftheworld.org/blog/2014/12/08/how-to-bookscanning/
• Barok, Dusan. Talks/Public Library. Monoskop (2015)
http://monoskop.org/Talks/Public_Library
• Custodians.online. In Solidarity with Library Genesis and
Science Hub (2015) http://custodians.online
• Battles, Matthew. Library: An Unquiet History. Random House (2014)
• Harris, Michael H. History of Libraries of the Western World.
Scarecrow Press (1999)
• MayDay Rooms. Activation (2015) http://
maydayrooms.org/activation/
• Krajewski, Markus. Paper Machines: About Cards &
Catalogs, 1548-1929. MIT Press (2011) https://
library.memoryoftheworld.org/b/
PaRC3gldHrZ3MuNPXyrh1hM1meyyaqvhaWlHTvr53NRjJ2k

For updates: https://www.zotero.org/groups/amateur_librarian__a_course_in_critical_pedagogy_reading_list
Last
Revision:
1·08·2016

1. For an economic history of the book in the Western Europe see Eltjo Buringh and Jan Luiten Van Zanden, “Charting the ‘Rise
of the West’: Manuscripts and Printed Books in Europe, A Long-Term Perspective from the Sixth through Eighteenth
Centuries,” The Journal of Economic History 69, No. 02 (June 2009): 409–45, doi:10.1017/S0022050709000837,
particularly Tables 1-5.
2. For the social history of public library see Matthew Battles, Library: An Unquiet History (Random House, 2014) chapter 5:
“Books for all”.
3. For this concept we remain indebted to the curatorial collective What, How and for Whom/WHW, who have presented the
work of Public Library within the exhibition Really Useful Knowledge they organized at Museo Reina Sofía in Madrid,
October 29, 2014 – February 9, 2015.
4. “Society for the Diffusion of Useful Knowledge,” Wikipedia, the Free Encyclopedia, June 25, 2015, https://

en.wikipedia.org/w/index.php?
title=Society_for_the_Diffusion_of_Useful_Knowledge&oldid=668644340.

5. Richard Johnson, “Really Useful Knowledge,” in CCCS Selected Working Papers: Volume 1, 1 edition, vol. 1 (London u.a.:
Routledge, 2014), 755.
6. Ibid., 752.
7. http://calibre-ebook.com/
8. https://www.memoryoftheworld.org/blog/2014/10/28/calibre-lets-share-books/
9. Henry A. Giroux, On Critical Pedagogy (Bloomsbury Academic, 2011), 5.

P.56

P.57

Bibliothécaire
amateur
- un
cours de
pédagogie
critique
Tomislav Medak & Marcell Mars (Public Library project)

Proposition de programme d'études de bibliothécaire amateur développé à
travers les activités et les exigences du projet Public Library. Prenant pour
base la généalogie historique de la bibliothèque publique en tant qu'institution
permettant l'accès à la connaissance, la tradition prolétaire de la connaissance
réellement utile et la puissance de l'amateur motivée par le développement
technologique, le programme couvre différents secteurs : depuis les flux de
travail directement applicables comme la numérisation, le partage et l'utilisation
de livres électroniques, à la politique et la tactique de conservation des
bibliothèques en ligne, en passant par la théorie médiatique appliquée qui est
implicite dans les pratiques du bibliothécaire amateur. La proposition est plus
amplement développée, complexifiée et sera testée durant les futures activités
de Public Library et des organisations affiliées.
BIBLIOTHÈQUE PUBLIQUE : UNE GÉNÉALOGIE POLITIQUE

Historiquement, les bibliothèques publiques sont parvenues à être un espace institutionnel
exempté de la marchandisation et de la privatisation de la connaissance. Un espace dans
lequel les œuvres littéraires et scientifiques sont abritées et rendues accessibles pour
l'éducation de chaque membre de la société, quel que soit son statut social ou économique.
Si, du point de vue libéral, l'éducation est un prérequis à la véritable participation au corps

politique, c'est dans cet espace institutionnel étroit que la citoyenneté trouve une base
matérielle importante à sa réalisation universelle.
Si aujourd'hui elle est une institution d'accès public et de savoir populaire, il a fallu une série
de transformations et de bouleversements sociaux au 18e et 19e siècle pour que la
bibliothèque se développe. Ces développements ont provoqué l'arrivée d'un flot de livres et
d'exigences politiques qui ont encouragé la bibliothèque à s'intégrer dans un horizon politique
démocratisant et égalitaire. En toile de fond historique de ces développements, il y eut
l'ascendance rapide du livre en tant que commodité de masse et l'importance croissante de la
culture de la lecture suite à l'invention des caractères d'imprimerie mobiles. Ayant émergé à
la même époque que le capitalisme, au début du 18e siècle le commerce des livres était en
pleine expansion. Alors qu'au 15e siècle, en Europe occidentale, les bibliothèques qui se
trouvaient autour des monastères, des tribunaux et des universités ne contenaient pas plus de
cinq millions de manuscrits, la production de l'imprimerie a atteint 700 millions de volumes,
et ce, au 18e siècle seulement.[1] Et alors que cela a offert un vecteur à l'émergence d'un
public de lecteurs bourgeois et contribué à une expansion sans précédent de la science
moderne, la culture de la lecture et des Lumières restait alors principalement le privilège
d'une minorité.
Deux bouleversements sociaux allaient commencer à changer cela. Le 2 novembre 1789,
l'Assemblée nationale de la Révolution française a approuvé la saisie de tous les biens
bibliothécaires de l'Église et de l'aristocratie. Des millions de volumes ont été transférés à la
Bibliothèque Nationale ainsi qu'aux bibliothèques régionales, à travers la France. Au même
moment, le capitalisme progressait, en particulier en Angleterre. Ce mouvement a
massivement déplacé une population rurale pauvre dans les centres urbains en pleine
croissance et propulsé le développement de la production industrielle. À la moitié du 19e
siècle, il a également introduit la presse typographique à vapeur dans la production
commerciale de livres. Puisqu'il était de plus en plus facile de produire des livres en masse,
les bibliothèques privées payantes, au service des
catégories privilégiées de la société, ont commencé à se
répandre. Ce phénomène a mis en relief la question de la
classe dans la demande naissante pour un accès public
aux livres.
Après une tentative ratée d'introduction du suffrage
universel en vue d'en finir avec le système de
représentation politique basée sur les droits de propriété à
travers l'Acte de réforme de 1832, le mouvement anglais
du chartisme a commencé à ouvrir des salles de lectures
et des bibliothèques de prêts coopératifs qui allaient
bientôt devenir un foyer pour l'échange social entre les
classes populaires. Suite aux mouvements
révolutionnaires de 1848, les classes dirigeantes

P.58

P.59

apeurées ont fini par accepter de répondre à la demande qui réclamait des bibliothèques financées
par l'argent public. Elles espéraient qu'un accès à la littérature et à l'édification favoriserait
l'éducation des travailleurs qualifiés qui étaient de plus en plus en demande, mais
souhaitaient aussi maintenir l'hégémonie sur la classe ouvrière au profit de la culture du
capitalisme, de l'intérêt personnel et de la compétition.[2]
LA CONNAISSANCE RÉELLEMENT UTILE
[3]

Sans surprise, les chartistes, qui s'étaient retrouvés chancelants après une défaite politique,
avaient commencé à ouvrir des salles de lecture et des bibliothèques de prêts coopératifs. En
effet, à l'époque, l'éducation proposée au prolétariat et aux pauvres par les classes dirigeantes
consistait, soit à une édification morale pieuse au service de la pacification politique, soit à
l'inculcation de qualifications ou de connaissances qui seraient utiles au propriétaire de
l'usine. Même les efforts aux allures nobles de la Society for the Diffusion of the Useful
Knowledge, une organisation du parti whig cherchant à apporter un apprentissage intellectuel
à la classe ouvrière et à la classe moyenne sous la forme de publications bon marché et
simplifiées, avaient pour objectif l'atténuation de la tendance radicale des mouvements
populaires.[4]
Ces efforts de pacification des masses opprimées les ont poussées à chercher des manières
d'organiser par elles-mêmes une éducation qui leur apporterait l'alphabétisation et une
connaissance réellement utile : une connaissance non pas appliquée, mais critique qui leur
permettrait de voir à travers leur propre soumission politique et économique, de développer
une politique radicale et d'innover leurs propres institutions sociales d'opposition. L'éducation
radicale, dépendante du peu de ressources et du manque de temps de la classe ouvrière, s'est
développée dans les cadres informels des foyers, des quartiers et des lieux de travail, mais
également à travers une presse radicale, une lecture commune et des groupes de discussion.[5]
La demande pour une connaissance réellement utile comprenait une critique de « toute
forme d'éducation “fournie” » et de la conception libérale selon laquelle « une “éducation
nationale” était une condition nécessaire à la garantie du suffrage universel ». Un
développement de « programmes et de pédagogies » radicaux constituait une part de l'arsenal
de « stratégie politique comme moyen de changer le monde ».[6]
PÉDAGOGIE CRITIQUE

L'émergence de la bibliothèque publique a donc eu lieu dans le contexte d'un compromis
historique entre la formation des fondements d'une pédagogie radicale et une réaction visant
à l'atténuer. Pourtant, à l'âge de la numérisation dans lequel nous pourrions penser que les
opportunités pour un accès à la connaissance se sont largement étendues, les bibliothèques

publiques se retrouvent particulièrement limitées dans leurs possibilités d'acquérir et de prêter
des éditions aussi bien sous une forme papier que numérique. Cette difficulté est un signe de
l'inégalité radicale de notre époque : une fois encore, l'émancipation politique se bat de
manière défensive pour une base matérielle pédagogique contre les forces croissantes de la
privatisation. Non seulement l'éducation de masse est devenue accessible à prix d'or
uniquement, entrainant la dette étudiante et la servitude qui y est associée, mais la
connaissance utile exigée par le marché du travail et la reproduction du capitalisme néolibéral
sont devenues la seule logique de l'éducation.
Sans surprise, au cours des six-sept dernières années, nous avons vu l'apprentissage
autodidacte, les bibliothèques de l'ombre et les bibliothécaires amateurs émerger pour contrer
la contraction des espaces d'exemption réduits par l'austérité et la commodification. Le projet
Public Library a été initié dans l'idée de contrer ce phénomène. Pour aider tout le monde à
apprendre l'utilisation d'outils simples permettant d'agir en tant qu'Amateur Librarian :
numériser, rassembler, partager, préserver des livres, des articles onéreux, introuvables ou
indésirables dans les coins mouvementés de notre planète.
Amateur Librarian a joué un rôle important dans le système narratif de Public Library. Un
rôle qui semble avoir porté ses fruits. Les gens rejoignent facilement le projet en « devenant »
bibliothécaire grâce à l'outil Calibre[7] et [let’s share books].[8] D'autres aspects du narratif de
Public Library ajoutent une articulation politique à cet acte simple, mais désobéissant. Public
Library perçoit une crise institutionnelle dans l'éducation, une impasse économique
d'austérité et une domination de la logique de commodité sous la forme du droit d'auteur.
Elle fait apparaitre la pratique du partage de livres et de catalogues des bibliothécaires
amateurs comme un défi pertinent à l'encontre de la convergence de cette crise, de cette
impasse et du régime du droit d'auteur.
Pour comprendre les hypothèses politiques et technologiques et développer plus en
profondeur les stratégies sur lesquelles les réactions des bibliothécaires amateurs se basent,
nous proposons un programme issu de la tradition pédagogique critique. La pédagogie
critique est une pratique productive et théorique qui rejette la définition du procédé
éducationnel comme réduit à une simple technique de communication de la connaissance et
présentée comme un mode d'acquisition neutre. Au contraire, la pédagogie est perçue plus
largement comme « une lutte pour la connaissance, le désir, les valeurs, les relations sociales,
et plus important encore, les modes d'action politique », « une attention portée aux
questions relatives au contrôle des conditions de production de la connaissance. »[9]

P.60

P.61

Actuellement, aucune industrie ne montre plus
d'asymétries au niveau du contrôle des conditions de
production de la connaissance que celle de la publication
académique. Refuser l'accès à des publications
académiques excessivement chères pour beaucoup
d'universités, en particulier dans l'hémisphère sud,
contraste ostensiblement avec les profits énormes qu'un
petit nombre d'éditeurs commerciaux tirent du travail
bénévole de scientifiques qui écrivent, révisent et éditent
des contributions et avec les prix exorbitants des
souscriptions que les bibliothèques institutionnelles
doivent payer. C'est donc ici que la bibliothèque amateur
atteint le sommet de son intensité en matière de
pédagogie critique : elle nous invite à formuler et à narrer
plus précisément sa pratique à travers un processus
partagé de découverte.
UN PROGRAMME

Une bibliothèque publique, c'est :
• un libre accès aux livres pour tous les membres de la
société,
• un catalogue de bibliothèque,
• un bibliothécaire.

From Amateur Librarian - A Course in Critical Pedagogy:
No industry in the present
demonstrates more the asymmetries of
control over the conditions of
production of knowledge than the
academic publishing. The denial of
access to outrageously expensive
academic publications for many
universities, particularly in the Global
South, stands in stark contrast to the
super-profits that a small number of
commercial publishers draws from the
free labour of scientists who write,
review and edit contributions and the
extortive prices their institutional
libraries have to pay for subscriptions.
From Voor elk boek is een
gebruiker:
FS: Hoe gaan jullie om met boeken
en publicaties die al vanaf het begin
digitaal zijn? DM: We kopen e-books
en e-tijdschriften en maken die
beschikbaar voor onderzoekers. Maar
dat zijn hele andere omgevingen,
omdat die content niet fysiek binnen
onze muren komt. We kopen toegang
tot servers van uitgevers of de
aggregator. Die content komt nooit bij
ons, die blijft op hun machines staan.
We kunnen daar dus eigenlijk niet
zoveel mee doen, behalve verwijzen
en zorgen dat het evengoed vindbaar
is als de print.

Le programme de bibliothécaire amateur développe
plusieurs aspects et implications d'une telle définition.
Certaines parties du programme ont été construites à
partir de différents ateliers et exposés qui se déroulaient précédemment dans le cadre du
projet Public Library. Certaines parties de ce programme doivent encore évoluer en s'appuyant sur un processus futur de recherche, d'échange et de production de connaissance dans le processus éducatif. Tout en restant schématique, en allant de la pratique immédiate à la stratégie, à la tactique et au registre réflexif de la
connaissance, il existe des personnes et pratiques - non
citées ici - desquelles nous imaginons pouvoir apprendre.
La première itération de ce programme pourrait aussi
bien être une académie d'été avec notre équipe
sélectionnée de bibliothécaires, concepteurs, chercheurs,
professeurs, qu'un petit atelier avec un groupe restreint
d'étudiants se plongeant dans un aspect précis du
programme. En résumé, ce programme est ouvert, aussi

bien au processus éducationnel qu'aux contributions des autres. Nous sommes ouverts aux
commentaires, aux dérivations et aux ajouts.
MODULE 1 : FLUX DE TRAVAIL
• du livre au livre électronique
◦ numériser un livre
avec un scanner de
livres
◦ supprimer la gestion
des droits numériques
et convertir au format
livre numérique
• du désordre au catalogue
◦ gérer une bibliothèque
de livres numériques
avec Calibre
◦ trouver des livres
numériques et des
articles dans des
bibliothèques en ligne
• de la référence à la bibliographie
◦ annoter à partir d'une
application ou d'un
appareil de lecture de
livres électroniques
◦ créer une
bibliographie
académique sur Zotero
• du dispositif de bloc au périphérique réseau
◦ partager votre
bibliothèque de livres
numériques d'un
périphérique local à
un appareil de lecture
◦ partager votre
bibliothèque de livres
numériques sur
internet avec [let’s
share books]

P.62

P.63

• de l'espace IP privé à l'espace IP public
◦ utiliser [let’s share
books] et
library.memoryoftheworld.org
◦ utiliser logan &
jessica
◦ utiliser Science Hub
◦ utiliser Tor

MODULE 2 : POLITIQUE/TACTIQUE
• du développement de la subordination à la désobéissance
subalterne
◦ développement inégal
et stratégies
politiques
◦ stratégies de
développement contre
les stratégies de sous
développement : accès
ouvert contre piratage
• de la propriété au commun
◦ de la propriété au
commun
◦ droit d'auteur,
publication
scientifique, accès
ouvert
◦ bibliothèque de
l'ombre, piratage,
custodians.online
• de la collection à l'action collective
◦ pédagogie critique et
éducation
◦ archive, activation et
action collective

MODULE 3 : ABSTRACTIONS DANS L'ACTION
• du linéaire à l'informatique
◦ bibliothèque et épistémologie : catalogue, recherche, découverte, référence

◦ livre imprimé et livre
numérique : page,
marge, dos
• du central au distribué
◦ bibliothécaires
professionnels et
bibliothécaires
amateurs
◦ infrastructure(s) de
réseau/topologies
(études des classes
dirigeantes)
• du factuel au fantastique
◦ l'univers pour
bibliothèque, la
bibliothèque pour
univers

LISTE DE LECTURE
• Mars, Marcell; Vladimir, Klemo. Download & How to:
Calibre & [let’s share books]. Memory of the World (2014)
https://www.memoryoftheworld.org/blog/2014/10/28/
calibre-lets-share-books/
• Buringh, Eltjo; Van Zanden, Jan Luiten. Charting the “Rise of
the West”: Manuscripts and Printed Books in Europe, A
Long-Term Perspective from the Sixth through Eighteenth
Centuries. The Journal of Economic History (2009) http://
journals.cambridge.org/article_S0022050709000837
• Mattern, Shannon. Library as Infrastructure. Places Journal
(2014) https://placesjournal.org/article/library-as-infrastructure/
• Antonić, Voja. Our beloved bookscanner. Memory of the
World (2012) https://www.memoryoftheworld.org/
blog/2012/10/28/our-beloved-bookscanner-2/
• Medak, Tomislav; Sekulić, Dubravka; Mertens, An. How to:
Bookscanning. Memory of the World (2014) https://
www.memoryoftheworld.org/blog/2014/12/08/how-to-bookscanning/
• Barok, Dusan. Talks/Public Library. Monoskop (2015)
http://monoskop.org/Talks/Public_Library
• Custodians.online. In Solidarity with Library Genesis and
Science Hub (2015) http://custodians.online

P.64

P.65

• Battles, Matthew. Library: An Unquiet History. Random House (2014)
• Harris, Michael H. History of Libraries of the Western World.
Scarecrow Press (1999)
• MayDay Rooms. Activation (2015) http://
maydayrooms.org/activation/
• Krajewski, Markus. Paper Machines: About Cards &
Catalogs, 1548-1929. MIT Press (2011) https://
library.memoryoftheworld.org/b/
PaRC3gldHrZ3MuNPXyrh1hM1meyyaqvhaWlHTvr53NRjJ2k

Dernière version: https://www.zotero.org/groups/amateur_librarian__a_course_in_critical_pedagogy_reading_list
Last
Revision:
1·08·2016

1. Pour une histoire économique du livre en Europe occidentale, voir Eltjo Buringh et Jan Luiten Van Zanden, « Charting the
‘Rise of the West’ : Manuscripts and Printed Books in Europe, A Long-Term Perspective from the Sixth through Eighteenth
Centuries, » The Journal of Economic History 69, n°. 02 (juin 2009) : 409–45, doi :10.1017/S0022050709000837, en
particulier les tableaux 1-5.
2. Pour une histoire sociale de la bibliothèque publique, voir Matthew Battles, Library: An Unquiet History (Random House,
2014) chapitre 5 : “Books for all”.
3. Pour ce concept, nous sommes redevables au collectif de curateurs What, How and for Whom/WHW, qui a présenté le travail de Public Library dans le cadre de l'exposition Really Useful Knowledge qu'ils ont organisée au Museo Reina Sofía à Madrid, entre le 29 octobre 2014 et le 9 février 2015.
4. « Society for the Diffusion of Useful Knowledge, » Wikipedia, the Free Encyclopedia, Juin 25, 2015, https://

en.wikipedia.org/w/index.php?
title=Society_for_the_Diffusion_of_Useful_Knowledge&oldid=668644340.

5. Richard Johnson, « Really Useful Knowledge, » dans CCCS Selected Working Papers: Volume 1, 1 édition, vol. 1
(Londres u.a. : Routledge, 2014), 755.
6. Ibid., 752.
7. http://calibre-ebook.com/
8. https://www.memoryoftheworld.org/blog/2014/10/28/calibre-lets-share-books/
9. Henry A. Giroux, On Critical Pedagogy (Bloomsbury Academic, 2011), 5.

A bag
but is
language
nothing
of words
(language is nothing but a bag of words)
MICHAEL MURTAUGH

In text indexing and other machine reading applications the term "bag of
words" is frequently used to underscore how processing algorithms often
represent text using a data structure (word histograms or weighted vectors)
where the original order of the words in sentence form is stripped away. While
"bag of words" might well serve as a cautionary reminder to programmers of
the essential violence perpetrated on a text and a call to critically question the
efficacy of methods based on subsequent transformations, the expression's use
seems in practice more like a badge of pride or a schoolyard taunt that would
go: Hey language: you're nothin' but a big BAG-OF-WORDS.
BAG OF WORDS

In information retrieval and other so-called machine-reading applications (such as text
indexing for web search engines) the term "bag of words" is used to underscore how in the
course of processing a text the original order of the words in sentence form is stripped away.
The resulting representation is then a collection of each unique word used in the text,
typically weighted by the number of times the word occurs.
Bag of words representations, also known as word histograms or weighted term vectors, are a standard part
of the data engineer's toolkit. But why such a drastic transformation? The utility of "bag of
words" is in how it makes text amenable to code, first in that it's very straightforward to
implement the translation from a text document to a bag of words representation. More

P.66

P.67

significantly, this transformation then opens up a wide collection of tools and techniques for
further transformation and analysis purposes. For instance, a number of libraries available in
the booming field of "data sciences" work with "high dimension" vectors; bag of words is a
way to transform a written document into a mathematical vector where each "dimension"
corresponds to the (relative) quantity of each unique word. While physically unimaginable
and abstract (imagine each of Shakespeare's works as points in a 14 million dimensional
space), from a formal mathematical perspective, it's quite a comfortable idea, and many
complementary techniques (such as principal component analysis) exist to reduce the
resulting complexity.
What's striking about a bag of words representation, given its centrality in so many text retrieval applications, is its irreversibility. Given a bag of words representation of a text, the task of producing the original text would require in essence the "brain" of a writer to recompose sentences, working with the patience of a devoted cryptogram puzzler to draw from the precise stock of available words. While "bag of words" might well serve as a cautionary reminder to programmers of the essential violence perpetrated on a text and a call to critically question the efficacy of methods based on subsequent transformations, the expression's use seems in practice more like a badge of pride or a schoolyard taunt that would go: Hey language: you're nothing but a big BAG-OF-WORDS. Following this spirit of the
term, "bag of words" celebrates a perfunctory step of "breaking" a text into a purer form
amenable to computation, to stripping language of its silly redundant repetitions and foolishly
contrived stylistic phrasings to reveal a purer inner essence.
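To make the notion concrete, here is a small sketch in plain Python (no particular indexing library implied) that builds a bag of words as a word histogram. The two inputs reuse the phrases of this text's title: because they contain the same words in a different order, their bags come out identical, which is exactly the irreversibility described above.

# Turn a text into a "bag of words": a histogram of its words,
# with the original word order stripped away.
import re
from collections import Counter

def bag_of_words(text):
    # lowercase, then keep only runs of letters as "words"
    return Counter(re.findall(r"[a-z]+", text.lower()))

a = bag_of_words("language is nothing but a bag of words")
b = bag_of_words("a bag but is language nothing of words")

print(a)        # Counter({'language': 1, 'is': 1, 'nothing': 1, ...})
print(a == b)   # True: once order is gone, the two texts are indistinguishable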
BOOK OF WORDS

Lieber's Standard Telegraphic Code, first published in 1896 and republished in various
updated editions through the early 1900s, is an example of one of several competing systems
of telegraph code books. The idea was for both senders and receivers of telegraph messages
to use the books to translate their messages into a sequence of code words which can then be
sent for less money as telegraph messages were paid by the word. In the front of the book, a
list of examples gives a sampling of how messages like: "Have bought for your account 400
bales of cotton, March delivery, at 8.34" can be conveyed by a telegram with the message "Ciotola, Delaboravi". In each case the reduction in the number of transmitted words is
highlighted to underscore the efficacy of the method. Like a dictionary or thesaurus, the book
is primarily organized around key words, such as act, advice, affairs, bags, bail, and bales,
under which exhaustive lists of useful phrases involving the corresponding word are provided
in the main pages of the volume. [1]
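As a toy model of how such a code book was used, the sketch below (Python, purely illustrative) encodes the example message quoted above through a one-entry dictionary and decodes it with the reverse mapping; a real code book supplies many thousands of such phrase-to-code-word pairs, and the word count here only gestures at the "paid by the word" economics.

# Toy model of a telegraph code book: phrases map to short code strings,
# so a long message can be sent (and billed) as a couple of words.
# The single entry is the example given in the front of Lieber's code book.
CODE = {
    "have bought for your account 400 bales of cotton, march delivery, at 8.34":
        "Ciotola, Delaboravi",
}
DECODE = {v: k for k, v in CODE.items()}

def encode(phrase):
    # fall back to sending the phrase in clear if it is not in the book
    return CODE.get(phrase.lower(), phrase)

message = "Have bought for your account 400 bales of cotton, March delivery, at 8.34"
coded = encode(message)
print(coded)                                          # Ciotola, Delaboravi
print(len(message.split()), "->", len(coded.split()), "billable words")
print(DECODE[coded])                                  # back to the full phrase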

P.68

P.69

P.70

P.71

[...] my focus in this chapter is on the inscription technology that grew parasitically
alongside the monopolistic pricing strategies of telegraph companies: telegraph code
books. Constructed under the bywords “economy,” “secrecy,” and “simplicity,”

telegraph code books matched phrases and words with code letters or numbers. The
idea was to use a single code word instead of an entire phrase, thus saving money by
serving as an information compression technology. Generally economy won out over secrecy, but in specialized cases, secrecy was also important.[2]

In Katherine Hayles' chapter devoted to telegraph code books she observes how:
The interaction between code and language shows a steady movement away from a
human-centric view of code toward a machine-centric view, thus anticipating the development of full-fledged machine codes with the digital computer.[3]

Aspects of this transitional moment are apparent in a notice prominently inserted in Lieber's code book:
After July, 1904, all combinations of letters that do not exceed ten will pass as one cipher word, provided that it is pronounceable, or that it is taken from the following languages: English, French, German, Dutch, Spanish, Portuguese or Latin.[4]
-- International Telegraphic Conference, July 1903

Conforming to international conventions regulating telegraph communication at that time, the
stipulation that code words be actual words drawn from a variety of European languages
(many of Lieber's code words are indeed arbitrary Dutch, German, and Spanish words)

P.72

P.73

underscores this particular moment of transition as reference to the human body in the form
of "pronounceable" speech from representative languages begins to yield to the inherent
potential for arbitrariness in digital representation.
What telegraph code books remind us of is the relation of language in general to economy. Whether these are economies of memory, attention, costs paid to a telecommunications company, or of computer processing time or storage space, encoding language or knowledge in any form of writing is a form of shorthand and always involves an interplay with what one expects to perform or "get out" of the resulting encoding.
Along with the invention of telegraphic codes comes a paradox that John Guillory has
noted: code can be used both to clarify and occlude. Among the sedimented structures
in the technological unconscious is the dream of a universal language. Uniting the
world in networks of communication that flashed faster than ever before, telegraphy
was particularly suited to the idea that intercultural communication could become
almost effortless. In this utopian vision, the effects of continuous reciprocal causality
expand to global proportions capable of radically transforming the conditions of human life. That these dreams were never realized seems, in retrospect, inevitable.[5]

P.74

P.75

Far from providing a universal system of encoding messages in the English language,
Lieber's code is quite clearly designed for the particular needs and conditions of its use. In
addition to the phrases ordered by keywords, the book includes a number of tables of terms
for specialized use. One table lists a set of words used to describe all possible permutations of
numeric grades of coffee (Choliam = 3,4, Choliambos = 3,4,5, Choliba = 4,5, etc.); another
table lists pairs of code words to express the respective daily rise or fall of the price of coffee
at the port of Le Havre in increments of a quarter of a Franc per 50 kilos ("Chirriado =
prices have advanced 1 1/4 francs"). From an archaeological perspective, Lieber's code
book reveals a cross section of the needs and desires of early 20th century business
communication between the United States and its trading partners.
The advertisements lining Lieber's code book further situate its use and that of commercial telegraphy. Among the many advertisements for banking and law services, office equipment, and alcohol are several ads for gunpowder and explosives, drilling equipment and metallurgic services, all with specific applications to mining. Building on telegraphy's formative role in ship-to-shore and ship-to-ship communication for reasons of safety, commercial telegraphy extended this network of communication to include those parties coordinating the "raw materials" being mined, grown, or otherwise extracted from overseas sources and shipped back for sale.

"RAW DATA NOW!"
Tim Berners-Lee: [...] Make a beautiful website, but
first give us the unadulterated data, we want the data.
We want unadulterated data. OK, we have to ask for
raw data now. And I'm going to ask you to practice
that, OK? Can you say "raw"?
Audience: Raw.
Tim Berners-Lee: Can you say "data"?
Audience: Data.
TBL: Can you say "now"?
Audience: Now!
TBL: Alright, "raw data now"!
[...]

From La ville intelligente - Ville de la
connaissance:
Étant donné que les nouvelles formes
modernistes et l'utilisation de
matériaux propageaient l'abondance
d'éléments décoratifs, Paul Otlet
croyait en la possibilité du langage
comme modèle de « données brutes »,
le réduisant aux informations
essentielles et aux faits sans ambiguïté,
tout en se débarrassant de tous les
éléments inefficaces et subjectifs.
From The Smart City - City of
Knowledge:
As new modernist forms and use of
materials propagated the abundance
of decorative elements, Otlet believed
in the possibility of language as a
model of 'raw data', reducing it to
essential information and
unambiguous facts, while removing all
inefficient assets of ambiguity or
subjectivity.

So, we're at the stage now where we have to do this -- the people who think it's a great idea. And all the people -- and I think there's a lot of people at TED who do things because -- even though there's not an immediate return on the investment because it will only really pay off when everybody else has done it -- they'll do it because they're the sort of person who just does things which would be good if everybody else did them. OK, so it's called linked data. I want you to make it. I want you to demand it.[6]
UN/STRUCTURED

As graduate students at Stanford, Sergey Brin and Lawrence (Larry) Page had an early
interest in producing "structured data" from the "unstructured" web. [7]
The World Wide Web provides a vast source of information of almost all types,
ranging from DNA databases to resumes to lists of favorite restaurants. However, this
information is often scattered among many web servers and hosts, using many different
formats. If these chunks of information could be extracted from the World Wide Web
and integrated into a structured form, they would form an unprecedented source of
information. It would include the largest international directory of people, the largest
and most diverse databases of products, the greatest bibliography of academic works,
and many other useful resources. [...]

P.76

P.77

2.1 The Problem
Here we define our problem more formally:
Let D be a large database of unstructured information such as the World Wide Web [...][8]

In a paper titled Dynamic Data Mining, Brin and Page situate their research as looking for rules
(statistical correlations) between words used in web pages. The "baskets" they mention stem
from the "market basket" techniques developed to find correlations between the items recorded
in the purchase receipts of supermarket customers. In their case, they deal with web pages
rather than shopping baskets, and with words instead of purchases. In transitioning to the much
larger scale of the web, they describe the usefulness of their research in terms of its
computational economy: the ability to tackle the scale of the web and still complete the task,
using contemporary computing power, in a reasonably short amount of time.
A traditional algorithm could not compute the large itemsets in the lifetime of the
universe. [...] Yet many data sets are difficult to mine because they have many
frequently occurring items, complex relationships between the items, and a large
number of items per basket. In this paper we experiment with word usage in documents
on the World Wide Web (see Section 4.2 for details about this data set). This data set
is fundamentally different from a supermarket data set. Each document has roughly
150 distinct words on average, as compared to roughly 10 items for cash register
transactions. We restrict ourselves to a subset of about 24 million documents from the
web. This set of documents contains over 14 million distinct words, with tens of
thousands of them occurring above a reasonable support threshold. Very many sets of
these words are highly correlated and occur often. [9]
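To make the market-basket analogy concrete, here is a minimal Python sketch (not Brin and Page's implementation; the toy corpus and the min_support value are illustrative assumptions) that treats the distinct words of each document as one "basket" and keeps only the word pairs that co-occur in enough documents:

```python
from itertools import combinations
from collections import Counter

# Each "basket" is the set of distinct words in one document,
# standing in for the items on one supermarket receipt.
documents = [
    "raw data now raw data",
    "the web of raw documents",
    "mining the web for data",
]
baskets = [set(doc.split()) for doc in documents]

min_support = 2  # a pair must co-occur in at least this many documents

pair_counts = Counter()
for basket in baskets:
    # count every unordered pair of words appearing together in one document
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

frequent_pairs = {pair: n for pair, n in pair_counts.items() if n >= min_support}
print(frequent_pairs)  # -> {('the', 'web'): 2}
```

The point of Brin and Page's sampling techniques is precisely that, at the scale of 24 million documents with roughly 150 distinct words each, the pair-counting loop above becomes intractable and has to be approximated.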
UN/ORDERED

In programming, I've encountered a recurring "problem" that's quite symptomatic. It goes
something like this: you (the programmer) have managed to cobble together a lovely "content
management system" (either from scratch, or using any number of helpful frameworks)
where your user can enter some "items" into a database, for instance to store bookmarks.
After this, the items are automatically presented in list form (say on a web page). The
author: It's great, except... could this bookmark come before that one? The problem stems
from the fact that the database ordering (a core functionality provided by any database)
somehow applies a sorting logic that's almost but not quite right. A typical example is the
sorting of names, where the details (where to place a name that starts with a Norwegian "Ø",
for instance) are language-specific, and when a mixture of languages occurs, no single ordering
is necessarily "correct". The (often) exasperated programmer might hastily add an
additional database field so that each item can also have an "order" (perhaps in the form of a
date or some other kind of (alpha)numerical "sorting" value) to be used to correctly order
the resulting list. Now the author has a means, awkward and indirect but workable, to control
the order of the presented data on the start page. But one might well ask, why not just edit
the resulting listing as a document? Not possible! Contemporary content management
systems are based on a data flow from a "pure" source of a database, through controlling
code and templates, to produce a document as a result. The document isn't the data, it's the
end result of an irreversible process. This problem, in this and many variants, is widespread
and reveals an essential backwardness in a particular "computer scientist" mindset about what
constitutes "data", and in particular its relationship to order, which turns what might be
a straightforward question of editing a document into an over-engineered database.
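As a minimal illustration of why no single ordering is "correct", the following Python sketch (the names and the spelled-out Norwegian alphabet are illustrative assumptions, not part of any particular content management system) contrasts naive codepoint sorting with an explicit Norwegian collation:

```python
# Naive codepoint order versus an explicit Norwegian collation for names with "Ø".
names = ["Åse", "Ørjan", "Zacharias"]

# Default: Python compares Unicode codepoints, so Å (U+00C5) and Ø (U+00D8)
# both sort after Z, and Å lands before Ø.
print(sorted(names))  # ['Zacharias', 'Åse', 'Ørjan']

# Norwegian alphabetical order ends with ... x y z æ ø å.
alphabet = "abcdefghijklmnopqrstuvwxyzæøå"

def norwegian_key(name):
    # map each letter to its position in the Norwegian alphabet
    return [alphabet.index(ch) for ch in name.lower()]

print(sorted(names, key=norwegian_key))  # ['Zacharias', 'Ørjan', 'Åse']
```

Other languages place such letters differently again (German, for instance, treats "Ö" as a variant of "O"), which is exactly why the database's single built-in ordering is "almost but not quite right".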
I recently worked with Nikolaos Vogiatzis, whose research explores playful and radically
subjective alternatives to the list. Vogiatzis was struck by how the earliest specifications
of HTML (still valid today) have separate elements (OL and UL) for "ordered" and
"unordered" lists.
The representation of the list is not defined here, but a bulleted list for unordered lists,
and a sequence of numbered paragraphs for an ordered list would be quite appropriate.
Other possibilities for interactive display include embedded scrollable browse panels. [10]

Vogiatzis' surprise lay in the idea of a list ever being considered "unordered" (or, in the
language used in the specification, of its order ever being considered "insignificant").
Indeed, in its suggested representation, still followed by modern web browsers, the only
visual difference between the two is that UL items are preceded by a bullet symbol, while
OL items are numbered.
The idea of ordering runs deep in programming practice, where essentially different data
structures are employed depending on whether order is to be maintained. The indexes of a
"hash" table, for instance (also known as an associative array), are ordered in an
unpredictable way governed by the particular implementation. This data structure, extremely
prevalent in contemporary programming practice, sacrifices order to offer other kinds of
efficiency (fast text-based retrieval, for instance).
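A small Python sketch of that trade-off (the bookmark names are illustrative, not tied to any particular program): a list remembers the order in which items were added, while a hash-based set arranges them according to its hash implementation, in exchange for fast lookups.

```python
# A list preserves insertion order; a hash-based set does not.
bookmarks = ["zebra.org", "apple.net", "mango.info"]

ordered = list(bookmarks)  # keeps the order the user entered
hashed = set(bookmarks)    # order is governed by the hash implementation

print(ordered)  # ['zebra.org', 'apple.net', 'mango.info']
print(hashed)   # e.g. {'mango.info', 'zebra.org', 'apple.net'}, and it may differ per run

# What the set offers in exchange is a fast membership test:
print("apple.net" in hashed)  # True, without scanning the whole collection
```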
DATA MINING

In announcing Google's impending data center in Mons, Belgian prime minister Di Rupo
invoked the link between the history of the mining industry in the region and the present and
future interest in "data mining" as practiced by IT companies such as Google.
Whether speaking of bales of cotton, barrels of oil, or bags of words, what links these subjects
is the way in which the notion of "raw material" obscures the labor and power structures
employed to secure them. "Raw" is always relative: "purity" depends on processes of
"refinement" that typically carry social/ecological impact.

P.78

P.79

Stripping language of order is an act of "disembodiment", detaching it from the acts of writing
and reading. The shift from (human) reading to machine reading involves a shift of
responsibility from the individual human body to the obscured responsibilities and seemingly
inevitable forces of the "machine", be it the machine of a market or the machine of an
algorithm.
The computer scientists' view of textual content as
"unstructured", be it in a webpage or the OCR-scanned
pages of a book, reflects a disregard for the processes and
labor of writing, editing, design, layout, typesetting, and
eventually publishing, collecting and cataloging [11].

From X = Y:
Still, it is reassuring to know that the
products hold traces of the work, that
even with the progressive removal of
human signs in automated processes,
the workers' presence never
disappears completely. This presence
is proof of the materiality of
information production, and becomes
a sign of the economies and
paradigms of efficiency and
profitability that are involved.

"Unstructured" to the computer scientist, means nonconformant to particular forms of machine reading.
"Structuring" then is a social process by which particular
(additional) conventions are agreed upon and employed.
Computer scientists often view text through the eyes of
their particular reading algorithm, and in the process
(voluntarily) blind themselves to the work practices which have produced and maintain these
"resources".
Berners-Lee, in chastising his audience of web publishers not only to publish online but also to
release "unadulterated" data, betrays a lack of imagination in considering how language is itself
structured, and a blindness to the need for more than additional technical standards to connect
to existing publishing practices.
Last
Revision:
2·08·2016

1. Benjamin Franklin Lieber, Lieber's Standard Telegraphic Code, 1896, New York; https://archive.org/details/
standardtelegrap00liebuoft
2. Katherine Hayles, "Technogenesis in Action: Telegraph Code Books and the Place of the Human", How We Think: Digital
Media and Contemporary Technogenesis, 2006
3. Hayles
4. Lieber's
5. Hayles
6. Tim Berners-Lee: The next web, TED Talk, February 2009 http://www.ted.com/talks/tim_berners_lee_on_the_next_web/
transcript?language=en
7. "Research on the Web seems to be fashionable these days and I guess I'm no exception." from Brin's Stanford webpage
8. Extracting Patterns and Relations from the World Wide Web, Sergey Brin, Proceedings of the WebDB Workshop at EDBT
1998, http://www-db.stanford.edu/~sergey/extract.ps
9. Dynamic Data Mining: Exploring Large Rule Spaces by Sampling; Sergey Brin and Lawrence Page, 1998; p. 2 http://
ilpubs.stanford.edu:8090/424/
10. Hypertext Markup Language (HTML): "Internet Draft", Tim Berners-Lee and Daniel Connolly, June 1993, http://
www.w3.org/MarkUp/draft-ietf-iiir-html-01.txt
11. http://informationobservatory.info/2015/10/27/google-books-fair-use-or-anti-democratic-preemption/#more-279

A Book
of the
Web
DUSAN BAROK

Is there a vital difference between publishing in print versus online, other than
reaching different groups of readers and a different lifespan? Both types of texts
are worth considering for preservation in libraries. The online environment has
created its own hybrid form between text and library, which is key to
understanding how digital text produces difference.
Historically, we have been treating texts as discrete units that are distinguished by their
material properties such as cover, binding, script. These characteristics establish them as
either a book, a magazine, a diary, sheet music and so on. One book differs from another,
books differ from magazines, printed matter differs from handwritten manuscripts. Each
volume is a self-contained whole, further distinguished by descriptors such as title, author,
date, publisher, and classification codes that allow it to be located and referred to. The
demarcation of a publication as a container of text works as a frame or boundary which
organises the way it can be located and read. Researching a particular subject matter, the
reader is carried along by classification schemes under which volumes are organised, by
references inside texts, pointing to yet other volumes, and by tables of contents and indexes of
subjects that are appended to texts, pointing to places within that volume.
So while their material properties separate texts into distinct objects, bibliographic information
provides each object with a unique identifier, a unique address in the world of print culture.
Such identifiable objects are further replicated and distributed across containers that we call
libraries, where they can be accessed.
The online environment, however, intervenes in this condition. It establishes shortcuts.
Through search engines, digital texts can be searched for any text sequence, regardless of
their distinct materiality and bibliographic specificity. This changes the way they function as a
library, and the way its main object, the book, should be rethought.
(1) Rather than operate as distinct entities, multiple texts are simultaneously accessible
through full-text search as if they are one long text, with its portions spread across the

P.80

P.81

web, and including texts that had not been considered as candidates for library
collections.
(2) The unique identifier at hand for these text portions is not the bibliographic
information, but the URL.
(3) The text is as long as web-crawlers of a given search engine are set to reach,
refashioning the library into a storage of indexed data.

These are some of the lines along which online texts appear to produce difference. The first
contrasts the distinct printed publication to the machine-readable text, the second the
bibliographic information to the URL, and the third the library to the search engine.
The introduction of full-text search has created an
environment in which all machine-readable online
documents in reach are effectively treated as one single
document. For any text-sequence to be locatable, it
doesn't matter in which file format it appears, nor whether
its interface is a database-powered website or mere
directory listing. As long as text can be extracted from a
document, it is a container of text sequences which itself
is a sequence in a 'book' of the web.
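A minimal sketch of that flattening, assuming a toy corpus of three 'documents' identified by URL (all names here are illustrative): a full-text index maps each word to the URLs it occurs in, so that searching pays no attention to where one document ends and another begins.

```python
from collections import defaultdict

# Toy corpus: URL -> extracted text, whatever the original file format was.
pages = {
    "http://example.org/a": "the universal book of the web",
    "http://example.org/b": "a book is a container of text sequences",
    "http://example.net/c": "web crawlers index text sequences",
}

# Inverted index: word -> set of URLs containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)

def search(*words):
    """Return the URLs in which all the given words occur."""
    hits = [index.get(w, set()) for w in words]
    return set.intersection(*hits) if hits else set()

print(sorted(search("text", "sequences")))
# ['http://example.net/c', 'http://example.org/b']
```

The identifier returned is the URL, not the bibliographic record, which is the shift the second point above describes.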
Even though this is hardly news after almost two decades
of Google Search ruling, little seems to have changed
with respect to the forms and genres of writing. Loyal to
standard forms of publishing, most writing still adheres to
the principle of coherence, based on units such as book
chapters, journal papers, newspaper articles, etc., that are
designed to be read from beginning to end.

From Voor elk boek is een gebruiker:
FS: But it is also about the way you
provide access, the library as interface?
Online, you now leave that to Google.
SVP: Access is no longer a matter of
"this institution has this, that
institution has something else"; all those
institutions can be reached through the
same interface. You can search across all
those collections, and that is again a
piece of the original dream of Otlet and
Vander Haeghen, the idea of a world
library. For every book there is a user;
the library just has to go and find that
user.
What I find intriguing is that all books
have become one book because they are
searchable at the same level, and that is
incredibly exciting. That is another way
of reading that even Otlet could not have
imagined. They would go mad if they knew
about this.

Still, the scope of textual forms appearing in search
results, and thus the corpus of texts into which they are
brought, is radically diversified: it may include
discussion board comments, product reviews, private e-mails, weather information, spam, etc., the type of content
that used to be omitted from library collections. Rather than being published in a traditional
sense, all these texts are produced onto digital networks by mere typing, copying or OCR-ing,
or are generated by machines and by sensors tracking movement, temperature, etc.
Even though portions of these texts may come with human or non-human authors attached,
authors have relatively little control over the discourses their writing gets embedded in. This is
also where the ambiguity of copyright manifests itself. Crawling bots pre-read the internet
with all its attached devices according to the agenda of their maintainers, and the decisions
about which indexed texts are served in search results, how, and to whom, are made in the code
of a library.
Libraries in this sense are not restricted to digitised versions of physical public or private
libraries as we know them from history. Commercial search engines, intelligence agencies,
and virtually all forms of online text collections can be thought of as libraries.
Acquisition policies figure here on the same level as crawling bots, dragnet/surveillance
algorithms, and the arbitrary motivations of users, all of which actuate the selection and
embedding of texts into structures that regulate their retrievability and, through access control,
produce certain kinds of communities or groups of readers. The author's intention of
partaking in this or that discourse is confronted by the discourse-conditioning operations of
retrieval algorithms. Hence, Google structures discourse through its Google Search
differently from how the Internet Archive does with its Wayback Machine, and from how the
GCHQ does with its dragnet programme.
They are all libraries, each containing a single 'book' whose pages are URLs with
timestamps and geostamps in the form of IP addresses. Google, GCHQ, JStor, Elsevier –
each maintains its own searchable corpus of texts. The decisions about who is to be admitted,
to which sections and under which conditions, are informed by a mix of copyright laws,
corporate agendas, management hierarchies, and national security issues. The various sets of
these conditions at work in a particular library also redefine the notion of publishing and of
the publication, and in turn the notion of the public. Corporate journal repositories exploit
publicly funded research by renting it only to libraries which can afford it; intelligence
agencies are set to extract texts from any moving target, basically any networked device,
apparently in the public interest and away from the public eye; publicly funded libraries are
prevented by outdated copyright laws and bureaucracy from providing digitised content
online; search engines create a sense of giving access to all public record online while only a
few know what is excluded and how search results are ordered.

From Amateur Librarian - A Course in Critical Pedagogy:
As books became more easily mass-produced, the commercial subscription libraries catering
to the better-off parts of society blossomed. This brought the class aspect of the nascent
demand for public access to books to the fore.

From Bibliothécaire amateur - un cours de pédagogie critique:
Puisqu'il était de plus en plus facile de produire des livres en masse, les bibliothèques privées
payantes, au service des catégories privilégiées de la société, ont commencé à se répandre. Ce
phénomène a mis en relief la question de la classe dans la demande naissante pour un accès
public aux livres.

P.82

P.83

It is within and against this milieu that libraries such as
the Internet Archive, Wikileaks, Aaaaarg, UbuWeb,
Monoskop, Memory of the World, Nettime, TheNextLayer
and others gain their political agency. Their counter-techniques for negotiating the publicness of publishing
include self-archiving, open access, book liberation,
leaking, whistleblowing, open source search algorithms
and so on.
Digitization and posting texts online are interventions in
the procedures that make search possible. Operating
online collections of texts is as much about organising
texts within libraries as it is about placing them within
books of the web.

Originally written 15-16 June 2015 in Prague, Brno
and Vienna for a talk given at the Technopolitics seminar in Vienna on 16 June 2015.
Revised 29 December 2015 in Bergen.
Last
Revision:
1·08·2016

The
Indexalist
MATTHEW FULLER

I first spoke to the patient in the last week of that August. That evening the sun was tender in
drawing its shadows across the lines of his face. The eyes gazed softly into a close middle
distance, as if composing a line upon a translucent page hung in the middle of the air, the
hands tapping out a stanza or two of music on legs covered by the brown folds of a towelling
dressing gown. He had the air of someone who had seen something of great amazement but
yet lacked the means to put it into language. As I got to know the patient over the next few
weeks I learned that this was not for the want of effort.
In his youth he had dabbled with the world-speak
language Volapük, one designed to do away with the
incompatibility of tongues, to establish a standard in
which scientific intercourse might be conducted with
maximum efficiency and with minimal friction in
movement between minds, laboratories and publications.
Latin biological names, the magnificent table of elements,
metric units of measurement, the nomenclature of celestial
objects from clouds to planets, anatomical parts and
medical conditions all had their own systems of naming
beyond any specific tongue. This was an attempt to bring
reason into speech and record, but there were other
means to do so when reality resisted these early
measures.

The dabbling, he reflected, had become a little more than
that. He had subscribed to journals in the language, he
wrote letters to colleagues and received them in return. A
few words of world-speak remained readily on his tongue, words that he spat out regularly
into the yellow-wallpapered lounge of the sanatorium with a disgust that was lugubriously
palpable.
According to my records, and in piecing together the notes of previous doctors, there was
something else however, something more profound that the language only hinted at. Just as
the postal system did not require the adoption of any language in particular but had its

P.84

P.85

formats that integrated them into addressee, address line, postal town and country, something
that organised the span of the earth, so there was a sense of the patient as having sustained
an encounter with a fundamental form of organisation that mapped out his soul. More thrilling
than the question of language indeed was that of the system of organisation upon which
linguistic symbols are inscribed. I present for the reader’s contemplation some statements
typical of those he seemed to mull over.
“The index card system spoke to my soul. Suffice it to say that in its use I enjoyed the
highest form of spiritual pleasure, and organisational efficiency, a profound flowering of
intellect in which every thought moved between its enunciation, evidence, reference and
articulation in a mellifluous flow of ideation and the gratification of curiosity.” This sense of
the soul as a roving enquiry moving across eras, across forms of knowledge and through the
serried landscapes of the vast planet and cosmos was returned to over and over, a sense that
an inexplicable force was within him yet always escaping his touch.
“At every reference stood another reference, each more
interesting than the last. Each the apex of a pyramid of
further reading, pregnant with the threat of digression,
each a thin high wire which, if not observed might lead
the author into the fall of error, a finding already found
against and written up.” He mentions too, a number of
times, the way the furniture seemed to assist his thoughts
- the ease of reference implied by the way in which the
desk aligned with the text resting upon the pages of the
off-print, journal, newspaper, blueprint or book above
which further drawers of cards stood ready in their
cabinet. All were integrated into the system. And yet,
amidst these frenetic recollections there was a note of
mourning in his contemplative moods, “The superposition
of all planes of enquiry and of thought in one system
repels those for whom such harmonious speed is
suspicious.” This thought was delivered with a stare that
was not exactly one of accusation, but that lingered with
the impression that there was a further statement to follow
it, and another, queued up ready to follow.

As I gained the trust of the patient, there was a sense in
which he estimated me as something of a junior
collaborator, a clerk to his natural role as manager. A
lucky, if slightly doubtful, young man whom he might
mentor into efficiency and a state of full access to
information. For his world, there was not the corruption and tiredness of the old methods.
Ideas moved faster in his mind than they might now across the world. To possess a register of

thoughts covering a period of some years is to have an asset, the value of which is almost
incalculable. That it can answer any question respecting any thought about which one has
had an enquiry is but the smallest of its merits. More important is the fact that it continually
calls attention to matters requiring such attention.
Much of his discourse was about the optimum means of arrangement of the system, there
was an art to laying out the cards. As the patient further explained, to meet the objection that
loose cards may easily be mislaid, cards may be tabbed with numbers from one to ten. When
arranged in the drawer, these tabs proceed from left to right across the drawer and the
absence of a single card can thus easily be detected. The cards are further arranged between
coloured guide cards. As an alternative to tabbed cards, signal flags may be used. Here,
metal clips may be attached to the top end of the card so that they stand out like guides. For use
of the system in relation to dates of the month, the card is printed with the numbers 1 to 31
at the top. The metal clip is placed as a signal to indicate the card is to receive attention on
the specified day. Within a large organisation a further card can be drawn up to assign
responsibility for processing that date’s cards. There were numerous means of working the
cards, special techniques for integrating them into any type of research or organisation, means
by which indexes operating on indexes could open mines of information and expand the
knowledge and capabilities of mankind.
As he pressed me further, I began to experiment with such methods myself by withdrawing
data from the sanatorium’s records and transferring it to cards in the night. The advantages of
the system are overwhelming. Cards, cut to the right mathematical degree of accuracy,
arrayed readily in drawers, set in cabinets of standard sizes that may be added to at ease,
may be apportioned out amongst any number of enquirers, all of whom may work on them
independently and simultaneously. The bound book, by contrast, may only be used by one
person at a time and that must stay upon a shelf itself referred to by an index card system. I
began to set up a structure of rows of mirrors on chains and pulleys and a set of levered and
hinged mechanical arms to allow me to open the drawers and to privately consult my files
from any location within the sanatorium. The clarity of the image is however so far too much
effaced by the diffusion of light across the system.
It must further be borne in mind that a system thus capable of indefinite expansion obviates
the necessity for hampering a researcher with furniture or appliances of a larger size than are
immediately required. The continuous and orderly sequence of the cards may be extended
further into the domain of furniture and to the conduct of business and daily life. Reasoning,
reference and the order of ideas emerging as they embrace and articulate a chaotic world and
then communicate amongst themselves turning the world in turn into something resembling
the process of thought in an endless process of consulting, rephrasing, adding and sorting.
For the patient, ideas flowed like a force of life, oblivious to any unnatural limitation. Thought
became, with the proper use of the system, part of the stream of life itself. Thought moved
through the cards not simply at the superficial level of the movement of fingers and the
mechanical sliding and bunching of cards, but at the most profound depths of the movement

P.86

P.87

between reality and our ideas of it. The organisational grace to be found in arrangement,
classification and indexing still stirred the remnants of his nervous system until the last day.
Last
Revision:
2·08·2016

P.138

P.139

An
experimental
transcript
SÎNZIANA PĂLTINEANU

Note: The editor has had the good fortune of finding a whole box of
handwritten index cards and various folded papers (from printed screenshots to
boarding passes) in the storage space of an institute. Upon closer investigation,
it has become evident that the mixed contents of the box make up one single
document. Difficult to decipher due to messy handwriting, the manuscript
poses further challenges to the reader because its fragments lack a pre-established order. Simply uploading high-quality facsimile images of the box
contents here would not solve the problems of legibility and coherence. As an
intermediary solution, the editor has opted to introduce below a selection of
scanned images and transcribed text from the found box. The transcript is
intended to be read as a document sample, as well as an attempt at manuscript
reconstruction, following the original in the author's hand as closely as possible:
pencilled in words in the otherwise black ink text are transcribed in brackets,
whereas curly braces signal erasures, peculiar marks or illegible parts on the
index cards. Despite shifts in handwriting styles, whereby letters sometimes
appear extremely rushed and distorted in multiple idiosyncratic ways, the
experts consulted unanimously declared that the manuscript was most likely
authored by one and the same person. To date, the author remains unknown.
Q

I've been running with a word in my mouth, running with this burning untitled shape, and I
just can't spit it out. Spit it with phlegm from a balcony, kiss it in a mirror, brush it away one
morning. I've been running with a word in my mouth, running...

… it must have been only last month that I began half-chanting-half-mumbling this looped
sequence of sentences on the staircase I regularly take down to work and back up to dream,
yet it feels as if it were half a century ago. Tunneling through my memory, my tongue begins
burning again and so I recollect that the subject matter was an agonizing, unutterable
obsession I needed to sort out most urgently. Back then I knew no better way than to keep
bringing it up obliquely until it would chemically dissolve itself into my blood or evaporate
through the pores of my skin. To whisper the obsession away, I thought not entirely so
naïvely, following a peculiar kind of vengeful logic, by emptying words of their pocket
contents on a spiraling staircase. An anti-incantation, a verbal overdose, a semantic dilution or
reduction – for the first time, I was ready to inflict harm on words! [And I am sure, the
thought has crossed other lucid minds, too.]
N

During the first several days, as I was rushing up and down the stairs like a Tasmanian devil,
swirling those same sentences in my expunction ritual, I hardly noticed that the brown
marbled staircase had a ravenous appetite for all my sound making and fuss: it cushioned the
clump of my footsteps, it absorbed the vibrations of my vocal chords and of my fingers
drumming on the handrail. All this unusual business must have carried on untroubled for
some time until that Wed. [?] morning when I tried approaching the employee at the
reception desk in the hideously large building where I live with a question about elevator
safety. I may take the elevator once in a blue moon, but I could not ignore the new
disquieting note I had been reading on all elevator doors that week:
m a k e / s u r e / t h e / e l e v a t o r / c a r / i s / s t a t i o n e d / o n / y o u r / f l
o o r / b e f o r e / s t e p p i n g / i n

T

P.140

P.141

Walking with a swagger, I entered the incandescent light field around the fancy semicircular,
brown reception desk, pressed down my palms on it, bent forward and from what I found to
be a comfortable inquiry angle, launched question mark after question mark: “Is everything
alright with the elevators? Do you know how worrisome I find the new warning on the
elevator doors? Has there been an accident? Or is this simply an insurance disclaimer-trick?”
Too many floors, too many times reading the same message against my will, must have
inflated my concern, so I breathed out the justification of my anxiety and waited for a
reassuring head shake to erase the imprint of the elevator shaft from my mind. Oddly, not the
faintest or most bored acknowledgment of my inquiry or presence came from across the desk.
From where I was standing, I performed a quick check to see if any cables came out of the
receptionist's ears. Nothing. Channels unobstructed, no ear mufflers, no micro-devices.
Suspicion eliminated, I waved at him, emitted a few other sounds – all to no avail. My
tunnel-visioned receptionist rolled his chair even closer to one of the many monitors under his
hooked gaze, his visual field now narrowed to a very acute angle, sheltered by his high desk.
How well I can still remember that at that exact moment I wished my face would turn into
the widest, most expensive screen, with an imperative, hairy ticker at the bottom –
h e y t o u c h m y s c r e e n m y m u s t a c h e s c r e e n e l e v a t o r t o u c h d o w n s
c r e a m

J

That's one of the first red flags I remember in this situation (here, really starting to come
across more or less as a story): a feeling of being silenced by the building I inhabited. [Or to
think about it the other way around: it's also plausible and less paranoid that upon hearing
my flash sentences the building manifested a sense of phonophobia and consequently
activated a strange defense mechanism. In any case, t]hat day, I had been forewarned, but I
failed to understand. As soon as I pushed the revolving door and left the building with a wry
smile [on my face], the traffic outside wolfed down the warning.
E

The day I resigned myself to those forces – and I assume, I had unleashed them upon myself
through my vengeful desire to hxxx {here, a 3-cm erasure} words until I could see carcass
after carcass roll down the stairs [truth be said, a practice that differed from other people's
doings only in my heightened degree of awareness, which entailed a partially malevolent but
perhaps understandable defensive strategy on my part] – that gloomy day, the burning
untitled shape I had been carrying in my mouth morphed into a permanent official of my
cavity – a word implant in my jaw! No longer do I feel pain on my tongue, only a tinge of
volcanic ash as an aftermath of this defeat.

U

I've been running with a word in my mouth, running with this burning untitled shape, and I
just can't spit it out. Spit it with phlegm from a balcony, kiss it in a mirror, brush it away one
morning. It has become my tooth, rooted in my nervous system. My word of mouth.
P

Since then, my present has turned into an obscure hole, and I can't climb out of it. Most of
the time, I'm sitting at the bottom of this narrow oubliette, teeth in knees, scribbling notes with
my body in a terribly twisted position. And when I'm not sitting, I'm forced to jump.
Agonizing thoughts numb my limbs so much so that I feel my legs turning to stone. On some
days I look up, terrified. I can't even make out whether the diffuse opening is egg- or square-shaped, but there's definitely a peculiar tic-tac sequence interspersed with neighs that my
pricked ears are picking up on. A sound umbrella, hovering somewhere up there, high above
my imploded horizon.
{illegible vertical lines resembling a bar code}
Hypotheses scanned and merged, I temporarily conclude that a horse-like creature with
metal intestines must be galloping round and round the hole I'm in. When I first noticed the
sound, its circular cadence was soft and unobtrusive, almost protective, but now the more laps
the clock-horse is running, the deeper the ticking and the neighing sounds are drilling into the
hole. I picture this as an ever rotating metal worm inside a mincing machine. If I point my
chin up, it bores through my throat!
B

P.142

P.143

What if, in returning to that red flag in my reconstructive undertaking [instead of “red flag”,
whose imperialist connotations strike me today, we cross it out and use “pyramid” to refer to
such potentially revealing frames, when intuitions {two words crossed out, but still legible:
seem to} give the alarm and converge before thoughts do], we posit that an elevator accident
occurred not long after my unanswered query at the High Reception Desk, and that I –
exceptionally – found myself in the elevator car that plummeted. Following this not entirely
bleak hypothesis, the oubliette I'm trapped in translates to an explainable state of blackout
and all the ticking and the drilling could easily find their counterparts in the host of medical
devices (and their noise-making) that support a comatose person. What if what I am
experiencing now is another kind of awareness, inside a coma, which will be gone once I
wake up in a few hours or days on a hospital bed, flowers by my side, someone crying / loud
as a horse / in the other corner of the room, next to a child's bed?
[Plausible as this scenario might be, it's still strange how the situation calls for reality-like
insertions to occur through “what if”s...]
H

Have I fallen into a lucid coma or am I a hallucination, made in 1941 out of gouache and
black pencil, paper, cardboard and purchased in 1966?
[To visualize the equation of my despair, the following elements are given: the above-whispered question escalates into a desperate shout and multiplies itself over a considerable
stretch of time at the expense of my vocal chords. After all, I am not made of black pencil or
cardboard or paper. Despite this conclusion, the effort has left me sulking for hours without
being able to scribble anything, overwhelmed by a sensation of being pinched and pulled
sideways by dark particles inside the mineral dampness of this open tomb. What's the use of
a vertical territory if you can't sniff it all the way up?]
{several overlapping thumbmarks in black ink, lower right corner}
W

/ one gorgeous whale \
my memory's biomorphic shadow
can anyone write in woodworm language?

how to teach the Cyrillic alphabet to woodworms?
how many hypotheses to /re-stabilize\ one's situation?
how many pyramids one on top of the other to the \coma/ surface?
the denser the pyramid net, the more confusing the situation. true/false\fiction

O

Hasty recordings of several escape attempts. A slew of tentacle-thoughts are rising towards
the ethereal opening and here I am / hopeful and unwashed \ just beneath a submundane
landscape of groping, shimmering arms, hungry to sense and to collect every memory detail in
an effort of sense making, to draw skin over hypotheses and hypotheses over bones. It might
be morning, it might be yesterday's morning out there or any other time in the past, when as I
cracked the door to my workplace, I entered my co-workers' question game and paraverbal
exchange:
Puckered lips open: “Listen, whose childhood dream was it to have one of their eye-bulbs
replaced with a micro fish-eye lens implant?” Knitted eyebrows: “Someone whose neural
pathways zigzagged phrenologist categories?” Microexpressionist: “How many semiotician-dentists and woodworm-writers have visited the Chaos Institute to date?” A ragged mane:
“The same number as the number of neurological tools for brain mapping that the Institute
owns?” {one lengthy word crossed out, probably a name}: “Would your brain topography get
upset and wrinkle if you imagined all the bureaucrats' desks from the largest country on earth
[by pop.] piled up in a pyramid?” Microexpressionist again: “Who wants to draft the call for
asemic writers?” Puckered lips closes {sic} the door.
I

It's a humongous workplace, with a blue entrance door, cluttered with papers on both sides.
See? Left hand on the entrance door handle, the woman presses it and the three of them
[guiding co-worker, faceless cameraman, scarlet-haired interviewer] squeeze themselves

P.144

P.145

inside all that paper. [Door shuts by itself.] Doesn't it feel like entering a paper sculpture? [,
she herself appearing for a split second to have undergone a material transformation, to have
turned into paper, the left side of her face glowing in a retro light. It's still her.] This is where
we work, a hybrid site officially called The Institute for Chaos and Neuroplasticity – packed
with folders, jammed with newspapers, stacks of private correspondence left and right,
recording devices, boxes with photographs, xeroxed documents on shelves, {several pea-sized
inkblots} printed screenshots and boarding passes – we keep it all, everything that museums
or archives have no interest in, all orphaned papers, photographic plates and imperiled books
or hard disks relatives might want to discard or even burn after someone's death. Exploring
leftovers around here can go up and down to horrifying and overwhelming sensorial levels...
Z

{a two-centimeter line of rust from a pin in the upper left corner of the index card}
Sociological-intelligence rumors have it that ours is the bureau for studying psychological
attachment to “garbage” (we very much welcome researchers), while others refer to the
Institute as the chaos-brewing place in the neighborhood because we employ absolutely no
classification method for storing papers or other media. The chances of finding us? [Raised
eyebrows and puckered lips as first responses to the scarlet-haired question.] Well, the
incidence is just as low as finding a document or device you're looking for in our storage.
Things are not lost; there are just different ways of finding them. A random stroll, a lucky find
– be that on-line or off-line –, or a seductive word of mouth may be the entrance points into
this experiential space, a manifesto for haphazardness, emotional intuitions, subversion of
neural pathways, and non-productive attitudes. A dadaist archive? queried Scarlet Hair.
Ours is definitely not an archive, there's no trace of pyramidal bureaucracy or taxonomy
here, no nation state at its birth. Hence you won't find a reservoir for national or racial
histories in here. Just imagine we changed perception scales, imagine a collective cut-up
project that we, chaos workers, are bringing together without scissors or screwdrivers because
all that gets through that blue door [and that is the only condition and standard] has already
been shaped and fits in here. [Guiding co-worker speaks in a monotonous and plain GPS
voice. Interview continues, but she forgets to mention that behind the blue door, in this very
big box 1. everyone is an authorized user and 2. time rests unemployed.]
K

Lately, several trucks loaded with gray matter have been adding extra hours of induced
chaos to everyone's content. Although it is the Institute's policy to accept paper donations
only from private individuals, it occasionally makes exceptions and takes on leftovers from
nonprofit organizations.

Each time this happens, an extended rite of passage follows so as to slightly delay and
thereby ease the arrival of chaos bits: the most reliable chaos worker, Microexpressionist by
metonymically selected feature, supervises the transfer of boxes at the very beginning of a
long hallway [eyeballs moving left to right, head planted in an incredibly stiff neck]. Then,
some fifty meters away, standing in front of the opened blue door, Puckered Lips welcomes
newcomers into the chaos, his gestures those of a marshaller guiding a plane into a parking
position. But once the gray [?] matter has passed over the threshold, once the last full
suitcase or shoe box with USB sticks has landed, directions are no longer provided.
Everyone's free to grow limbs and choose temporal neighbors.
L

… seated cross-legged at the longest desk ever, Ragged Mane is randomly extracting
photodocuments from the freshest chaos segment with a metallic extension of two of her
fingers [instead of a pince-nez, she's the one to carry a pair of tweezers in a small pocket at
all times]. “Look what I've just grabbed,” and she pushes a sepia photograph in front of
Knitted Eyebrows, whose otherwise deadpan face instantaneously gets stamped this time
with a question mark: “What is it?” “Another capture, of course! Two mustaches, one hat,
three pairs of glasses, some blurred figures in the background, and one most fascinating
detail!” – [… takes out a magnifying glass and points with one of her flashy pink fingers to
the handheld object under the gaze of four eyes on the left side of the photo. Then, Ragged
Mane continues:] “That raised right index finger above a rectangular-shaped object... you see
it?” “You mean [00:00 = insertion of a lengthy time frame = 00:47] could this mustachioed
fellow be holding a touchscreen mobile phone in his left hand?” For several unrecorded
skeptical moments, they interlock their eyes and knit their eyebrows closer together.
Afterward, eyes split again and roll on the surface of the photograph like black-eyed peas on
a kitchen table. “It's all specks and epoch details,” a resigned voice breaks from the chaos
silence, when, the same thought crosses their minds, and Ragged Mane and Knitted
Eyebrows turn the photo over, almost certain to find an answer. [A simultaneous hunch.] In
block letters it most clearly reads: “DOCUMENTING THE FILMING OF
PEACEMAKERS / ANALOGUE PHOTOGRAPHY ON FILM SET / BERN,
SWITZERLAND / 17.05.2008”
X

P.146

P.147

/ meanwhile, the clock-horse has grown really nervous out there – it's drawing smaller and
smaller circles / a spasmodic and repetitive activity causing dislocation / a fine powder
begins to float inside the oubliette in the slowest motion possible / my breathing has already
been hampered, but now my lungs and brain get filled with an asphyxiating smell of old
paper / hanging on my last tentacle-thought, on my tiptoes, refusing to choke and disintegrate
/ NOT READY TO BE RECYCLED / {messiest handwriting}
A Cyrillic cityscape is imagining how one day all the bureaucrats' desks from the largest
country on earth get piled up in a pyramid. “This new shape is deflating the coherence of my
horizon. [the cityscape worries] No matter!” Once the last desk is placed at the very top, the
ground cracks a half-open mouth, a fissure the length of Rxssxx. On the outside it's spotted
with straddled city topographies, inside, it's filled with a vernacular accumulation of anational
dust without a trace of usable pasts.
{violent horizontal strokes over the last two lines, left and right from the hole at the bottom of
the index card; indecipherable}
M

“What's on TV this afternoon?” This plain but beautifully metamorphosed question has just
landed with a bleep on the chaos couch, next to Ragged Mane, who usually loses no chance
to retort [that is, here, to admonish too hard a fall]: “Doucement!” Under the weight of a
short-lived feeling of guilt, {name crossed out} echoes back in a whisper – d – o – u – c – e
– m – e – n – t –, and then, as if after a palatable word tasting, she clicks her tongue and
with it, she searches for a point of clarification: “Doucement is an anagram for documenté –
which one do you actually mean?” [All conversations with {name crossed out} would suffer
unsettling Meaning U-turns because she specialized in letter permutation.]

Y

Gurgling sounds from a not-so-distant corner of the chaos dump make heads simultaneously
rotate in the direction of the TV screen, where a documentary has just started with a drone's-eye view over a city of lined-up skyscrapers. Early on, the commentator breaks into unwitty
superlatives and platitudes, while the soundtrack unnecessarily dramatizes a 3D layering of
the city structure. Despite all this, the mood on the couch is patient, and viewers seem to
absorb the vignetted film. “A city like no other, as atypical as Cappadocia,” explains the low
trepid voice from the box, “a city whose peculiarity owes first to the alignment of all its
elements, where street follows street in a parallel fashion like in linear writing. Hence, reading
the city acquires a literal dimension, skyscrapers echo clustered block letters on a line, and
the pedestrian reader gets reduced to the size of a far-sighted microbe.”
[Woodworm laughs]
V

Minutes into the documentary, the micro-drone camera zooms into the silver district/chapter
of the city to show another set of its features: instead of steel and glass, what from afar
appeared to be ordinary skyscrapers turn out to be “300-meter-tall lofty towers of mailbox-like constructs of dried skin, sprayed on top with silver paint for rims, and decorated with
huge love padlocks. A foreboding district for newlyweds?” [nauseating atmosphere] Unable
to answer or to smell, the mosquito-sized drone blinks in the direction of the right page, and it
speedily approaches another windowless urban variation: the vastest area of city towers – the
Wood Drawers District. “Despite its vintage (here and there rundown) aura, the area is an
exquisite, segregated space for library aficionados, designed out of genetically-engineered
trees that grow naturally drawer-shaped with a remarkable capacity for self-(re)generation. In
terms of real proportions, the size of a mailbox- or a drawer-apartment is comparable to that

P.148

P.149

of a shipping container, from the alternative but old housing projects…” bla bla the furniture
bla... [that chaos corner, so remote and so coal black / that whole atmosphere with blurred
echoes beclouds my reasoning / and right now, I'm feeling nauseous and cursed with all the
words in an unabridged dictionary / new deluxe edition, with black covers and golden
characters]
D

In front of the place where, above a modest skyline, every single morning [scholars'] desks
conjoin in the shape of a multi-storied pyramid, there's a sign that reads: right here you can
bend forward, place your hands on your back, press down your spine with your thumbs and
throw up an index card, throw out a reality version, take out a tooth. In fact, take out all that
you need and once you feel relieved, exchange personas as if in an emergency situation.
Then, behind vermillion curtains, replace pronouns at will.
[Might this have been a pipe dream? An intubated wish for character replacement? {Name
crossed out} would whisper C E E H I N N O R T as place name]
R

[“gray – …
Other Color Terms –
argentine, cerise, cerulean, cyan, ocher, perse, puce, taupe, vermillion”]
To be able to name everything and everyone, especially all the shades in a gray zone, and
then to re-name, re-narrate/re-count, and re-photograph all of it. To treat the ensuing
multilayered landscape with/as an infinitive verb and to scoop a place for yourself in the
accordion of surfaces. For instance, take the first shot – you're being stared at, you're under
the distant gaze of three {words crossed out; illegible}. Pale, you might think, how pallid and
lifeless they appear to be, but try to hold their gaze and notice how the interaction grows
uncomfortable through persistence. Blink, if you must. Move your weight from one leg to the
other, and become aware of how unflinching their concentration remains, as if their eyes are
lured into a screen. And as you're trying to draw attention to yourself by making ampler,
pantomimic gestures, your hands touch the dark inner edges of the monitor you're [boxed] in.
Look out and around again and again...

G

Some {Same?} damned creature made only of arms and legs has been leaving a slew of
black dots all over my corridors and staircases, ashes on my handrails, and larger spots of
black liquid in front of my elevator doors on the southern track – my oldest and dearest
vertically mobile installation, the one that has grown only ten floors high. If I were in shape,
attuned and wired to my perception angles and sensors, I could identify beyond precision that
it is a 403 cabal plotting I begin fearing. Lately, it's all been going really awry. Having failed
at the character recognition of this trickster creature, the following facts can be enumerated in
view of overall [damage] re-evaluation, quantification, and intruder excision: emaciating
architectural structure, increasingly deformed spiraling of brown marbled staircases, smudged
finger- and footprints on all floors, soddened and blackened ceilings, alongside thousands of
harrowing fingers and a detection of an insidious and undesirable multiplication of {word
crossed out: white} hands [tbc].
C

Out of the blue, the clock-horse dislocated particles expand in size, circle in all directions like
giant flies around a street lamp, and then in the most predictable fashion, they collide with my
escapist reminiscences multiple times until I lose connection and the landscape above comes
to a [menacing] stillness. [How does it look now? a scarlet-haired question.] I'm blinking, I'm
moving my weight from one leg to the other, before I can attempt a description of the earth
balls that stagnate in the air among translucent tentacles [they're almost gone] and floating
dioramas of miniatures. Proportions have inverted, scraped surfaces have commingled and
my U-shaped. reality. and. vision. are. stammering... I can't find my hands!
...

P.150

P.151

-- Ospal ( talk ) 09:27, 19 November 2015 (CET) Here is where the transcript ENDS,
where the black text lines dribble back into the box. For information on document location or
transcription method, kindly contact the editor.

Last
Revision:
28·06·2016

LES
UTOPISTES
and
their
common
logos/et
leurs
logos
communs
DENNIS POHL
EN

In itself this list is just a bag of words that orders the common terms used in the works of
Le Corbusier and Paul Otlet with the help of text comparison. The quantity of similar words
is related to the word-count of each text, which means that each appearance has a different
weight. Taking this into account, the appearance of the word esprit, for instance, is more
significant in Vers une Architecture (127 times) than in Traité de documentation (240
times), although the total number of appearances in the latter is almost two times higher.
Beyond the mere quantified use of a common language, this list follows the intuition that
there is something more to elaborate in the discourse between these two utopians (a sketch of
such a text comparison follows the list of books below). One possible reading can be found in
The Smart City, an essay that traces their encounter.
FR

Cette liste n'est en elle-même qu'un sac de mots qui organise les termes les plus
communs utilisés dans les travaux de Le Corbusier et Paul Otlet à l'aide d'un comparateur
de texte. Le nombre de mots similaires est rapporté au comptage des mots de chaque texte,
ce qui signifie que chaque occurrence a un poids différent. Les apparitions du mot esprit, par
exemple, sont plus significatives dans Vers une Architecture (127 fois) que dans le Traité de
documentation (240 fois), bien que le nombre total d'occurrences y soit presque deux fois
plus élevé.

P.152

P.153

Au-delà du simple décompte de la pratique d'un langage commun, cette liste suit l'intuition
qu'il y a quelque chose qui mériterait une recherche plus approfondie dans le discours de ces
deux utopistes. Une proposition pour une telle recherche peut être trouvée dans La Ville
Intelligente, un essai qui retrace leur rencontre.
Books taken into consideration/Livres pris en compte:
• Le Corbusier, Vers une Architecture, Paris: les éditions G. Crès, 1923. Word-count: 32733.
• Paul Otlet, Traité de documentation: le livre sur le livre, théorie et pratique, Bruxelles:
Mundaneum, Palais Mondial, 1934. Word-count: 356854.
• Le Corbusier, Urbanisme, Paris: les éditions G. Crès, 1925. Word-count: 37699.
• Paul Otlet, Monde: essai d'universalisme - Connaissance du Monde, Sentiment du
Monde, Action organisee et Plan du Monde, Bruxelles: Editiones Mundeum 1935.
Word-count: 140209.
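As a rough sketch of the kind of comparison described above (the file names are placeholders and this is not the tool that was actually used), one can count the words shared by two of the texts and weigh each count against the length of its text:

```python
import re
from collections import Counter

def word_counts(path):
    """Lower-cased word frequencies and total word-count for one plain-text file."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-zàâçéèêëîïôöûùüÿæœ]+", f.read().lower())
    return Counter(words), len(words)

# Placeholder file names, standing in for two of the four books listed above.
corbusier, n_corbusier = word_counts("vers_une_architecture.txt")
otlet, n_otlet = word_counts("traite_de_documentation.txt")

# Terms common to both texts, with raw counts and counts relative to text length,
# so that each appearance carries a different weight.
for term in sorted(set(corbusier) & set(otlet)):
    weight_c = corbusier[term] / n_corbusier
    weight_o = otlet[term] / n_otlet
    print(f"{term}: {corbusier[term]} / {otlet[term]} "
          f"(relative weight {weight_c:.5f} vs {weight_o:.5f})")
```

Weighed this way, the 127 occurrences of esprit in the much shorter Vers une Architecture indeed outweigh the 240 occurrences in the Traité de documentation.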
acquis appears 5 times in Vers une Architecture, 21 times in Traité de documentation, 6 times in Urbanisme and 11 times in Monde.
activité appears 10 times in Vers une Architecture, 43 times in Traité de documentation, 10 times in Urbanisme and 78 times in Monde.
actuel appears 9 times in Vers une Architecture, 27 times in Traité de documentation, 6 times in Urbanisme and 22 times in Monde.
actuelle appears 7 times in Vers une Architecture, 19 times in Traité de documentation, 8 times in Urbanisme and 26 times in Monde.
actuelles appears 5 times in Vers une Architecture, 6 times in Traité de documentation, 6 times in Urbanisme and 6 times in Monde.
affaires appears 6 times in Vers une Architecture, 42 times in Traité de documentation, 30 times in Urbanisme and 19 times in Monde.
air appears 12 times in Vers une Architecture, 12 times in Traité de documentation, 14 times in Urbanisme and 16 times in Monde.
aise appears 7 times in Vers une Architecture, 71 times in Traité de documentation, 6 times in Urbanisme and 12 times in Monde.
alors appears 32 times in Vers une Architecture, 165 times in Traité de documentation, 38 times in Urbanisme and 52 times in Monde.
angle appears 5 times in Vers une Architecture, 18 times in Traité de documentation, 16 times in Urbanisme and 7 times in Monde.
années appears 7 times in Vers une Architecture, 89 times in Traité de documentation, 10 times in Urbanisme and 42 times in Monde.
ans appears 17 times in Vers une Architecture, 91 times in Traité de documentation, 16 times in Urbanisme and 109 times in Monde.
architecture appears 199 times in Vers une Architecture, 51 times in Traité de documentation, 26 times in Urbanisme and 11 times in Monde.
art appears 44 times in Vers une Architecture, 370 times in Traité de documentation, 6 times in Urbanisme and 60 times in Monde.
aspect appears 5 times in Vers une Architecture, 45 times in Traité de documentation, 8 times in Urbanisme and 29 times in Monde.
auto appears 10 times in Vers une Architecture, 13 times in Traité de documentation, 12 times in Urbanisme and 5 times in Monde.
autrement appears 6 times in Vers une Architecture, 15 times in Traité de documentation, 6 times in Urbanisme and 10 times in Monde.
avant appears 8 times in Vers une Architecture, 131 times in Traité de documentation, 6 times in Urbanisme and 45 times in Monde.
avoir appears 13 times in Vers une Architecture, 208 times in Traité de documentation, 6 times in Urbanisme and 72 times in Monde.
base appears 8 times in Vers une Architecture, 119 times in Traité de documentation, 6 times in Urbanisme and 66 times in Monde.
beauté appears 14 times in Vers une Architecture, 34 times in Traité de documentation, 14 times in Urbanisme and 21 times in Monde.
beaucoup appears 9 times in Vers une Architecture, 114 times in Traité de documentation, 8 times in Urbanisme and 23 times in Monde.
besoin appears 16 times in Vers une Architecture, 82 times in Traité de documentation, 8 times in Urbanisme and 40 times in Monde.
calcul appears 19 times in Vers une Architecture, 15 times in Traité de documentation, 24 times in Urbanisme and 21 times in Monde.
cause appears 6 times in Vers une Architecture, 47 times in Traité de documentation, 6 times in Urbanisme and 26 times in Monde.
cela appears 16 times in Vers une Architecture, 99 times in Traité de documentation, 16 times in Urbanisme and 31 times in Monde.
cellule appears 7 times in Vers une Architecture, 9 times in Traité de documentation, 10 times in Urbanisme and 7 times in Monde.
centre appears 7 times in Vers une Architecture, 55 times in Traité de documentation, 50 times in Urbanisme and 44 times in Monde.

P.154

P.155

chapitre appears 7 times in Vers une Architecture, 35 times in Traité de documentation, 12 times in Urbanisme and 5 times in Monde.
chacun appears 6 times in Vers une Architecture, 151 times in Traité de documentation, 6 times in Urbanisme and 60 times in Monde.
chemins appears 9 times in Vers une Architecture, 18 times in Traité de documentation, 12 times in Urbanisme and 5 times in Monde.
chemin appears 7 times in Vers une Architecture, 19 times in Traité de documentation, 18 times in Urbanisme and 9 times in Monde.
choses appears 43 times in Vers une Architecture, 215 times in Traité de documentation, 20 times in Urbanisme and 157 times in Monde.
chose appears 34 times in Vers une Architecture, 110 times in Traité de documentation, 12 times in Urbanisme and 52 times in Monde.
ciel appears 8 times in Vers une Architecture, 13 times in Traité de documentation, 48 times in Urbanisme and 18 times in Monde.
cinquante appears 5 times in Vers une Architecture, 6 times in Traité de documentation, 8 times in Urbanisme and 5 times in Monde.
circulation appears 6 times in Vers une Architecture, 27 times in Traité de documentation, 44 times in Urbanisme and 8 times in Monde.
cité appears 10 times in Vers une Architecture, 29 times in Traité de documentation, 34 times in Urbanisme and 35 times in Monde.
claire appears 6 times in Vers une Architecture, 18 times in Traité de documentation, 6 times in Urbanisme and 6 times in Monde.
compte appears 11 times in Vers une Architecture, 96 times in Traité de documentation, 8 times in Urbanisme and 37 times in Monde.
construction appears 50 times in Vers une Architecture, 24 times in Traité de documentation, 14 times in Urbanisme and 8 times in Monde.
conception appears 23 times in Vers une Architecture, 62 times in Traité de documentation, 8 times in Urbanisme and 64 times in Monde.
construire appears 17 times in Vers une Architecture, 10 times in Traité de documentation, 6 times in Urbanisme and 9 times in Monde.
contre appears 13 times in Vers une Architecture, 91 times in Traité de documentation, 6 times in Urbanisme and 79 times in Monde.
conà appears 9 times in Vers une Architecture, 49 times in Traité de documentation, 10 times in Urbanisme and 20 times in Monde.
constructions appears 7 times in Vers une Architecture, 8 times in Traité de documentation, 10 times in Urbanisme and 9 times in Monde.
connaissance appears 5 times in Vers une Architecture, 76 times in Traité de documentation, 8 times in Urbanisme and 56 times in Monde.
conditions

appears 5 times in Vers une 111 times in Traité de 8 times in
Architecture,
documentation,
Urbanisme and

57 times in
Monde.

cours

appears 8 times in Vers une 150 times in Traité de 8 times in
Architecture,
documentation,
Urbanisme and

65 times in
Monde.

coup

appears 7 times in Vers une 34 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

14 times in
Monde.

crise

appears 11 times in Vers
une Architecture,

8 times in Traité de
documentation,

6 times in
Urbanisme and

45 times in
Monde.

création

appears 22 times in Vers
une Architecture,

82 times in Traité de
documentation,

10 times in
Urbanisme and

48 times in
Monde.

créer

appears 10 times in Vers
une Architecture,

57 times in Traité de
documentation,

10 times in
Urbanisme and

25 times in
Monde.

crée

appears 10 times in Vers
une Architecture,

26 times in Traité de
documentation,

6 times in
Urbanisme and

18 times in
Monde.

culture

appears 7 times in Vers une 33 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

68 times in
Monde.

demain

appears 7 times in Vers une 17 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

11 times in
Monde.

dessus

appears 6 times in Vers une 28 times in Traité de
Architecture,
documentation,

16 times in
Urbanisme and

21 times in
Monde.

devant

appears 18 times in Vers
une Architecture,

75 times in Traité de
documentation,

12 times in
Urbanisme and

43 times in
Monde.

dire

appears 17 times in Vers
une Architecture,

185 times in Traité de 16 times in
documentation,
Urbanisme and

72 times in
Monde.

disposition

appears 5 times in Vers une 83 times in Traité de
Architecture,
documentation,

doit

appears 13 times in Vers
une Architecture,

domaines

appears 5 times in Vers une 42 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

38 times in
Monde.

donne

appears 8 times in Vers une 148 times in Traité de 12 times in
Architecture,
documentation,
Urbanisme and

44 times in
Monde.

droite

appears 11 times in Vers
une Architecture,

8 times in
Monde.

P.156

6 times in
Urbanisme and

408 times in Traité de 14 times in
documentation,
Urbanisme and

40 times in Traité de
documentation,

36 times in
Urbanisme and

8 times in
Monde.
134 times in
Monde.

P.157

droits

appears 8 times in Vers une 22 times in Traité de
Architecture,
documentation,

droit

appears 6 times in Vers une 106 times in Traité de 36 times in
Architecture,
documentation,
Urbanisme and

125 times in
Monde.

désordre

appears 7 times in Vers une 9 times in Traité de
Architecture,
documentation,

12 times in
Urbanisme and

12 times in
Monde.

effet

appears 7 times in Vers une 78 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

32 times in
Monde.

encore

appears 25 times in Vers
une Architecture,

enfin

appears 5 times in Vers une 46 times in Traité de
Architecture,
documentation,

ensemble

appears 16 times in Vers
une Architecture,

329 times in Traité de 14 times in
documentation,
Urbanisme and

123 times in
Monde.

entre

appears 29 times in Vers
une Architecture,

342 times in Traité de 18 times in
documentation,
Urbanisme and

246 times in
Monde.

esprit

appears 127 times in Vers
une Architecture,

240 times in Traité de 36 times in
documentation,
Urbanisme and

150 times in
Monde.

espace

appears 20 times in Vers
une Architecture,

69 times in Traité de
documentation,

16 times in
Urbanisme and

122 times in
Monde.

esprits

appears 6 times in Vers une 44 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

35 times in
Monde.

exemple

appears 5 times in Vers une 143 times in Traité de 12 times in
Architecture,
documentation,
Urbanisme and

30 times in
Monde.

existence

appears 5 times in Vers une 73 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

75 times in
Monde.

face

appears 15 times in Vers
une Architecture,

11 times in Traité de
documentation,

12 times in
Urbanisme and

18 times in
Monde.

faire

appears 51 times in Vers
une Architecture,

410 times in Traité de 24 times in
documentation,
Urbanisme and

faites

appears 7 times in Vers une 45 times in Traité de
Architecture,
documentation,

faut

appears 46 times in Vers
une Architecture,

285 times in Traité de 54 times in
documentation,
Urbanisme and

126 times in
Monde.

fer

appears 12 times in Vers
une Architecture,

30 times in Traité de
documentation,

14 times in
Monde.

16 times in
Urbanisme and

197 times in Traité de 22 times in
documentation,
Urbanisme and
8 times in
Urbanisme and

6 times in
Urbanisme and

14 times in
Urbanisme and

37 times in
Monde.

106 times in
Monde.
29 times in
Monde.

137 times in
Monde.
12 times in
Monde.

fin

appears 5 times in Vers une 122 times in Traité de 6 times in
Architecture,
documentation,
Urbanisme and

66 times in
Monde.

fois

appears 11 times in Vers
une Architecture,

208 times in Traité de 8 times in
documentation,
Urbanisme and

77 times in
Monde.

font

appears 24 times in Vers
une Architecture,

93 times in Traité de
documentation,

10 times in
Urbanisme and

25 times in
Monde.

fond

appears 5 times in Vers une 67 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

29 times in
Monde.

forme

appears 14 times in Vers
une Architecture,

france

appears 6 times in Vers une 190 times in Traité de 6 times in
Architecture,
documentation,
Urbanisme and

57 times in
Monde.

grande

appears 40 times in Vers
une Architecture,

202 times in Traité de 82 times in
documentation,
Urbanisme and

69 times in
Monde.

grand

appears 34 times in Vers
une Architecture,

276 times in Traité de 34 times in
documentation,
Urbanisme and

89 times in
Monde.

grands

appears 24 times in Vers
une Architecture,

187 times in Traité de 24 times in
documentation,
Urbanisme and

88 times in
Monde.

grandes

appears 21 times in Vers
une Architecture,

182 times in Traité de 36 times in
documentation,
Urbanisme and

93 times in
Monde.

grandeur

appears 11 times in Vers
une Architecture,

34 times in Traité de
documentation,

6 times in
Urbanisme and

19 times in
Monde.

gros

appears 5 times in Vers une 25 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

8 times in
Monde.

guerre

appears 5 times in Vers une 115 times in Traité de 8 times in
Architecture,
documentation,
Urbanisme and

137 times in
Monde.

géométrie

appears 17 times in Vers
une Architecture,

14 times in Traité de
documentation,

24 times in
Urbanisme and

12 times in
Monde.

hauteur

appears 14 times in Vers
une Architecture,

21 times in Traité de
documentation,

10 times in
Urbanisme and

8 times in
Monde.

haute

appears 9 times in Vers une 34 times in Traité de
Architecture,
documentation,

8 times in
Urbanisme and

13 times in
Monde.

haut

appears 9 times in Vers une 71 times in Traité de
Architecture,
documentation,

18 times in
Urbanisme and

24 times in
Monde.

heures

appears 15 times in Vers
une Architecture,

20 times in
Urbanisme and

16 times in
Monde.

P.158

442 times in Traité de 18 times in
documentation,
Urbanisme and

45 times in Traité de
documentation,

106 times in
Monde.

P.159

heure

appears 15 times in Vers
une Architecture,

histoire

appears 6 times in Vers une 338 times in Traité de 10 times in
Architecture,
documentation,
Urbanisme and

183 times in
Monde.

homme

appears 74 times in Vers
une Architecture,

189 times in Traité de 66 times in
documentation,
Urbanisme and

315 times in
Monde.

hommes

appears 11 times in Vers
une Architecture,

122 times in Traité de 30 times in
documentation,
Urbanisme and

144 times in
Monde.

hors

appears 9 times in Vers une 36 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

12 times in
Monde.

humaine

appears 19 times in Vers
une Architecture,

72 times in Traité de
documentation,

14 times in
Urbanisme and

96 times in
Monde.

humain

appears 10 times in Vers
une Architecture,

45 times in Traité de
documentation,

16 times in
Urbanisme and

61 times in
Monde.

idées

appears 14 times in Vers
une Architecture,

283 times in Traité de 6 times in
documentation,
Urbanisme and

80 times in
Monde.

idée

appears 13 times in Vers
une Architecture,

168 times in Traité de 6 times in
documentation,
Urbanisme and

75 times in
Monde.

immenses

appears 11 times in Vers
une Architecture,

22 times in Traité de
documentation,

8 times in
Urbanisme and

12 times in
Monde.

immense

appears 8 times in Vers une 62 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

25 times in
Monde.

industrielle

appears 12 times in Vers
une Architecture,

6 times in
Urbanisme and

14 times in
Monde.

industriels

appears 5 times in Vers une 18 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

9 times in
Monde.

jeu

appears 14 times in Vers
une Architecture,

39 times in Traité de
documentation,

6 times in
Urbanisme and

29 times in
Monde.

jour

appears 13 times in Vers
une Architecture,

216 times in Traité de 22 times in
documentation,
Urbanisme and

69 times in
Monde.

lequel

appears 5 times in Vers une 67 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

19 times in
Monde.

libre

appears 7 times in Vers une 48 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

45 times in
Monde.

lieu

appears 10 times in Vers
une Architecture,

384 times in Traité de 6 times in
documentation,
Urbanisme and

89 times in
Monde.

58 times in Traité de
documentation,

7 times in Traité de
documentation,

32 times in
Urbanisme and

28 times in
Monde.

logique

appears 14 times in Vers
une Architecture,

117 times in Traité de 8 times in
documentation,
Urbanisme and

39 times in
Monde.

loin

appears 11 times in Vers
une Architecture,

46 times in Traité de
documentation,

34 times in
Urbanisme and

17 times in
Monde.

louis

appears 11 times in Vers
une Architecture,

33 times in Traité de
documentation,

6 times in
Urbanisme and

10 times in
Monde.

lumière

appears 45 times in Vers
une Architecture,

77 times in Traité de
documentation,

10 times in
Urbanisme and

38 times in
Monde.

machine

appears 17 times in Vers
une Architecture,

119 times in Traité de 20 times in
documentation,
Urbanisme and

29 times in
Monde.

machines

appears 12 times in Vers
une Architecture,

83 times in Traité de
documentation,

10 times in
Urbanisme and

29 times in
Monde.

main

appears 8 times in Vers une 96 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

15 times in
Monde.

mal

appears 15 times in Vers
une Architecture,

33 times in Traité de
documentation,

8 times in
Urbanisme and

26 times in
Monde.

masse

appears 6 times in Vers une 35 times in Traité de
Architecture,
documentation,

8 times in
Urbanisme and

52 times in
Monde.

masses

appears 5 times in Vers une 21 times in Traité de
Architecture,
documentation,

12 times in
Urbanisme and

19 times in
Monde.

mesure

appears 20 times in Vers
une Architecture,

110 times in Traité de 16 times in
documentation,
Urbanisme and

46 times in
Monde.

milieu

appears 7 times in Vers une 58 times in Traité de
Architecture,
documentation,

20 times in
Urbanisme and

56 times in
Monde.

moderne

appears 31 times in Vers
une Architecture,

79 times in Traité de
documentation,

20 times in
Urbanisme and

35 times in
Monde.

moins

appears 16 times in Vers
une Architecture,

243 times in Traité de 10 times in
documentation,
Urbanisme and

93 times in
Monde.

moment

appears 11 times in Vers
une Architecture,

105 times in Traité de 18 times in
documentation,
Urbanisme and

36 times in
Monde.

monde

appears 18 times in Vers
une Architecture,

177 times in Traité de 26 times in
documentation,
Urbanisme and

331 times in
Monde.

montre

appears 10 times in Vers
une Architecture,

27 times in Traité de
documentation,

6 times in
Urbanisme and

11 times in
Monde.

morale

appears 6 times in Vers une 32 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

35 times in
Monde.

P.160

P.161

moyens

appears 16 times in Vers
une Architecture,

125 times in Traité de 20 times in
documentation,
Urbanisme and

59 times in
Monde.

moyen

appears 5 times in Vers une 268 times in Traité de 8 times in
Architecture,
documentation,
Urbanisme and

97 times in
Monde.

mécanique

appears 12 times in Vers
une Architecture,

50 times in Traité de
documentation,

31 times in
Monde.

nature

appears 18 times in Vers
une Architecture,

120 times in Traité de 20 times in
documentation,
Urbanisme and

166 times in
Monde.

nouveau

appears 39 times in Vers
une Architecture,

98 times in Traité de
documentation,

16 times in
Urbanisme and

43 times in
Monde.

nouvelle

appears 13 times in Vers
une Architecture,

129 times in Traité de 6 times in
documentation,
Urbanisme and

60 times in
Monde.

nouvelles

appears 6 times in Vers une 180 times in Traité de 6 times in
Architecture,
documentation,
Urbanisme and

65 times in
Monde.

nécessaire

appears 11 times in Vers
une Architecture,

80 times in Traité de
documentation,

12 times in
Urbanisme and

43 times in
Monde.

or

appears 10 times in Vers
une Architecture,

63 times in Traité de
documentation,

14 times in
Urbanisme and

45 times in
Monde.

ordre

appears 59 times in Vers
une Architecture,

421 times in Traité de 30 times in
documentation,
Urbanisme and

organes

appears 5 times in Vers une 74 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

21 times in
Monde.

outil

appears 19 times in Vers
une Architecture,

12 times in Traité de
documentation,

6 times in
Urbanisme and

5 times in
Monde.

outillage

appears 11 times in Vers
une Architecture,

28 times in Traité de
documentation,

14 times in
Urbanisme and

6 times in
Monde.

paris

appears 20 times in Vers
une Architecture,

192 times in Traité de 60 times in
documentation,
Urbanisme and

16 times in
Monde.

part

appears 13 times in Vers
une Architecture,

214 times in Traité de 14 times in
documentation,
Urbanisme and

77 times in
Monde.

partie

appears 11 times in Vers
une Architecture,

222 times in Traité de 10 times in
documentation,
Urbanisme and

58 times in
Monde.

partout

appears 8 times in Vers une 48 times in Traité de
Architecture,
documentation,

12 times in
Urbanisme and

28 times in
Monde.

passé

appears 17 times in Vers
une Architecture,

12 times in
Urbanisme and

49 times in
Monde.

55 times in Traité de
documentation,

16 times in
Urbanisme and

128 times in
Monde.

passion

appears 8 times in Vers une 6 times in Traité de
Architecture,
documentation,

pensée

appears 10 times in Vers
une Architecture,

291 times in Traité de 12 times in
documentation,
Urbanisme and

127 times in
Monde.

perfection

appears 12 times in Vers
une Architecture,

14 times in Traité de
documentation,

10 times in
Urbanisme and

7 times in
Monde.

petit

appears 11 times in Vers
une Architecture,

88 times in Traité de
documentation,

14 times in
Urbanisme and

23 times in
Monde.

petite

appears 7 times in Vers une 28 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

18 times in
Monde.

petites

appears 5 times in Vers une 25 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

12 times in
Monde.

peuvent

appears 13 times in Vers
une Architecture,

198 times in Traité de 12 times in
documentation,
Urbanisme and

45 times in
Monde.

pied

appears 13 times in Vers
une Architecture,

12 times in Traité de
documentation,

8 times in
Monde.

plan

appears 86 times in Vers
une Architecture,

151 times in Traité de 32 times in
documentation,
Urbanisme and

174 times in
Monde.

place

appears 32 times in Vers
une Architecture,

208 times in Traité de 14 times in
documentation,
Urbanisme and

62 times in
Monde.

plans

appears 15 times in Vers
une Architecture,

60 times in Traité de
documentation,

12 times in
Urbanisme and

27 times in
Monde.

pleine

appears 6 times in Vers une 12 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

6 times in
Monde.

point

appears 18 times in Vers
une Architecture,

278 times in Traité de 16 times in
documentation,
Urbanisme and

133 times in
Monde.

pourrait

appears 10 times in Vers
une Architecture,

93 times in Traité de
documentation,

12 times in
Urbanisme and

32 times in
Monde.

poésie

appears 5 times in Vers une 83 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

7 times in
Monde.

pratique

appears 15 times in Vers
une Architecture,

98 times in Traité de
documentation,

6 times in
Urbanisme and

28 times in
Monde.

pratiques

appears 5 times in Vers une 44 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

11 times in
Monde.

première

appears 11 times in Vers
une Architecture,

133 times in Traité de 8 times in
documentation,
Urbanisme and

38 times in
Monde.

P.162

58 times in
Urbanisme and

22 times in
Urbanisme and

14 times in
Monde.

P.163

prix

appears 7 times in Vers une 133 times in Traité de 8 times in
Architecture,
documentation,
Urbanisme and

35 times in
Monde.

principes

appears 5 times in Vers une 132 times in Traité de 12 times in
Architecture,
documentation,
Urbanisme and

53 times in
Monde.

problème

appears 53 times in Vers
une Architecture,

92 times in Traité de
documentation,

28 times in
Urbanisme and

88 times in
Monde.

programme

appears 14 times in Vers
une Architecture,

24 times in Traité de
documentation,

6 times in
Urbanisme and

12 times in
Monde.

produit

appears 13 times in Vers
une Architecture,

81 times in Traité de
documentation,

24 times in
Urbanisme and

38 times in
Monde.

progrès

appears 9 times in Vers une 133 times in Traité de 14 times in
Architecture,
documentation,
Urbanisme and

73 times in
Monde.

puis

appears 10 times in Vers
une Architecture,

115 times in Traité de 6 times in
documentation,
Urbanisme and

48 times in
Monde.

quatre

appears 11 times in Vers
une Architecture,

114 times in Traité de 12 times in
documentation,
Urbanisme and

40 times in
Monde.

qualité

appears 6 times in Vers une 39 times in Traité de
Architecture,
documentation,

quelque

appears 14 times in Vers
une Architecture,

132 times in Traité de 6 times in
documentation,
Urbanisme and

64 times in
Monde.

quelques

appears 12 times in Vers
une Architecture,

167 times in Traité de 10 times in
documentation,
Urbanisme and

33 times in
Monde.

raison

appears 6 times in Vers une 112 times in Traité de 38 times in
Architecture,
documentation,
Urbanisme and

77 times in
Monde.

rapport

appears 6 times in Vers une 106 times in Traité de 6 times in
Architecture,
documentation,
Urbanisme and

33 times in
Monde.

rapide

appears 5 times in Vers une 53 times in Traité de
Architecture,
documentation,

8 times in
Urbanisme and

16 times in
Monde.

règle

appears 5 times in Vers une 22 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

5 times in
Monde.

résoudre

appears 5 times in Vers une 18 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

8 times in
Monde.

sens

appears 31 times in Vers
une Architecture,

176 times in Traité de 14 times in
documentation,
Urbanisme and

64 times in
Monde.

sentiment

appears 10 times in Vers
une Architecture,

33 times in Traité de
documentation,

69 times in
Monde.

6 times in
Urbanisme and

14 times in
Urbanisme and

8 times in
Monde.

services

appears 5 times in Vers une 107 times in Traité de 20 times in
Architecture,
documentation,
Urbanisme and

24 times in
Monde.

seule

appears 7 times in Vers une 93 times in Traité de
Architecture,
documentation,

8 times in
Urbanisme and

43 times in
Monde.

siècle

appears 6 times in Vers une 283 times in Traité de 20 times in
Architecture,
documentation,
Urbanisme and

93 times in
Monde.

sol

appears 28 times in Vers
une Architecture,

10 times in Traité de
documentation,

20 times in
Urbanisme and

24 times in
Monde.

solution

appears 8 times in Vers une 26 times in Traité de
Architecture,
documentation,

8 times in
Urbanisme and

25 times in
Monde.

solutions

appears 6 times in Vers une 10 times in Traité de
Architecture,
documentation,

16 times in
Urbanisme and

10 times in
Monde.

souvent

appears 7 times in Vers une 207 times in Traité de 10 times in
Architecture,
documentation,
Urbanisme and

30 times in
Monde.

suivant

appears 12 times in Vers
une Architecture,

102 times in Traité de 16 times in
documentation,
Urbanisme and

30 times in
Monde.

surface

appears 25 times in Vers
une Architecture,

51 times in Traité de
documentation,

19 times in
Monde.

système

appears 10 times in Vers
une Architecture,

256 times in Traité de 32 times in
documentation,
Urbanisme and

129 times in
Monde.

série

appears 56 times in Vers
une Architecture,

98 times in Traité de
documentation,

8 times in
Urbanisme and

24 times in
Monde.

sécurité

appears 5 times in Vers une 5 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

9 times in
Monde.

table

appears 7 times in Vers une 113 times in Traité de 6 times in
Architecture,
documentation,
Urbanisme and

9 times in
Monde.

tableau

appears 5 times in Vers une 106 times in Traité de 8 times in
Architecture,
documentation,
Urbanisme and

24 times in
Monde.

technique

appears 6 times in Vers une 153 times in Traité de 8 times in
Architecture,
documentation,
Urbanisme and

60 times in
Monde.

tel

appears 11 times in Vers
une Architecture,

114 times in Traité de 10 times in
documentation,
Urbanisme and

32 times in
Monde.

telle

appears 10 times in Vers
une Architecture,

105 times in Traité de 8 times in
documentation,
Urbanisme and

28 times in
Monde.

tels

appears 6 times in Vers une 47 times in Traité de
Architecture,
documentation,

P.164

16 times in
Urbanisme and

8 times in
Urbanisme and

16 times in
Monde.

P.165

temps

appears 24 times in Vers
une Architecture,

terrain

appears 7 times in Vers une 11 times in Traité de
Architecture,
documentation,

toutes

appears 32 times in Vers
une Architecture,

591 times in Traité de 14 times in
documentation,
Urbanisme and

259 times in
Monde.

toujours

appears 22 times in Vers
une Architecture,

147 times in Traité de 20 times in
documentation,
Urbanisme and

65 times in
Monde.

tour

appears 5 times in Vers une 71 times in Traité de
Architecture,
documentation,

travail

appears 27 times in Vers
une Architecture,

travers

appears 7 times in Vers une 58 times in Traité de
Architecture,
documentation,

18 times in
Urbanisme and

40 times in
Monde.

trop

appears 15 times in Vers
une Architecture,

93 times in Traité de
documentation,

16 times in
Urbanisme and

28 times in
Monde.

trouve

appears 9 times in Vers une 93 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

32 times in
Monde.

très

appears 18 times in Vers
une Architecture,

209 times in Traité de 16 times in
documentation,
Urbanisme and

47 times in
Monde.

univers

appears 15 times in Vers
une Architecture,

27 times in Traité de
documentation,

8 times in
Urbanisme and

68 times in
Monde.

unique

appears 8 times in Vers une 60 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

23 times in
Monde.

usines

appears 13 times in Vers
une Architecture,

6 times in
Urbanisme and

6 times in
Monde.

vastes

appears 6 times in Vers une 14 times in Traité de
Architecture,
documentation,

12 times in
Urbanisme and

14 times in
Monde.

vers

appears 15 times in Vers
une Architecture,

156 times in Traité de 28 times in
documentation,
Urbanisme and

100 times in
Monde.

vie

appears 21 times in Vers
une Architecture,

249 times in Traité de 26 times in
documentation,
Urbanisme and

329 times in
Monde.

ville

appears 38 times in Vers
une Architecture,

30 times in Traité de
documentation,

122 times in
Urbanisme and

11 times in
Monde.

villes

appears 33 times in Vers
une Architecture,

34 times in Traité de
documentation,

52 times in
Urbanisme and

38 times in
Monde.

436 times in Traité de 22 times in
documentation,
Urbanisme and
16 times in
Urbanisme and

6 times in
Urbanisme and

403 times in Traité de 50 times in
documentation,
Urbanisme and

9 times in Traité de
documentation,

239 times in
Monde.
6 times in
Monde.

25 times in
Monde.
177 times in
Monde.

voir

appears 19 times in Vers
une Architecture,

252 times in Traité de 14 times in
documentation,
Urbanisme and

48 times in
Monde.

voit

appears 14 times in Vers
une Architecture,

50 times in Traité de
documentation,

28 times in
Urbanisme and

27 times in
Monde.

voilà

appears 13 times in Vers
une Architecture,

13 times in Traité de
documentation,

20 times in
Urbanisme and

23 times in
Monde.

volonté

appears 7 times in Vers une 39 times in Traité de
Architecture,
documentation,

8 times in
Urbanisme and

46 times in
Monde.

vue

appears 18 times in Vers
une Architecture,

272 times in Traité de 6 times in
documentation,
Urbanisme and

105 times in
Monde.

yeux

appears 41 times in Vers
une Architecture,

76 times in Traité de
documentation,

8 times in
Monde.

6 times in
Urbanisme and

Last
Revision:
3·08·2016

P.166

P.167

X=Y
DICK RECKARD

0. INNOVATION OF THE SAME

Last
Revision:
2·08·2016

The PR imagery produced by and around the
Mundaneum (disambiguation: the institution in
Mons) often suggests, through a series of
'samenesses', an essential continuity between
Otlet's endeavour and Internet-related products
and services, in particular Google's. A good
example is a scene from the video "From
industrial heartland to the Internet age",
published by the Mundaneum in 2014, where the drawers of the Mundaneum
(disambiguation: Otlet's Utopia) morph into the servers of one of Google's
data centres.
This approach is not limited to images: a recurring discourse that shapes some of the
exhibitions taking place in the Mundaneum maintains that the dream of the Belgian utopian
has been kept alive in the development of internetworked communications, and currently
finds its spiritual successor in the products and services of Google. Even though there are
many connections and similarities between the two endeavours, one has to acknowledge that
Otlet was an internationalist, a socialist and a utopian, that his projects were not profit-oriented,
and most importantly, that he was living in the temporal and cultural context of modernism at
the beginning of the 20th century. The constructed identities and continuities that detach
Otlet and the Mundaneum from a specific historical frame ignore the different scientific,
social and political milieus involved. This means that these narratives exclude the discordant or
disturbing elements that are inevitable when considering such a complex figure in its entirety.
This is not surprising, given the parties involved in the discourse: these types of
instrumental identities and differences suit the rhetorical tone of Silicon Valley. Newly
launched IT products for example, are often described as groundbreaking, innovative and
different from anything seen before. In other situations, the same products can be advertised
as being exactly like something that already exists[1]. While novelty and difference
surprise and amaze, sameness reassures and comforts. For example, Google Glass was
marketed as revolutionary and innovative, but when it was attacked for its blatant privacy

issues, some defended it as just a camera and a phone joined together. The sameness-difference duo fulfils a clear function: on the one hand, it suggests that technological
advancements might alter the way we live dramatically, and we should be ready to give up
our old-fashioned ideas about life and culture for the sake of innovation. On the other hand, it
proposes we should not be worried about change, and that society has always evolved
through disruptions, undoubtedly for the better. For each questionable groundbreaking new
invention, there is a previous one with the same ideal, potentially with just as many critics...
Great minds think alike, after all. This sort of a-historical attitude pervades techno-capitalist
milieus, creating a cartoonesque view of the past, punctuated by great men and great
inventions, a sort of technological variant of Carlyle's Great Man Theory. In this view, the
Internet becomes the invention of a few father/genius figures, rather than the result of a long
and complex interaction of diverging efforts and interests of academics, entrepreneurs and
national governments. This instrumental reading of the past is largely consistent with the
theoretical ground on which the Californian Ideology[2] is based, in which the conception of
history is pervaded by various strains of technological determinism (from Marshall McLuhan
to Alvin Toffler[3]) and capitalist individualism (in generic neoliberal terms, up to the fervent
objectivism of Ayn Rand).
The appropriation of Paul Otlet's figure as Google's grandfather is a historical simplification,
and the samenesses in this tale are not without foundation. Many concepts and ideals of
documentation theories have reappeared in cybernetics and information theory, and are
therefore present in the narrative of many IT corporations, as in Mountain View's case. With
the intention of restoring a historical complexity, it might be more interesting to play
exactly the same game ourselves, rather than try to dispel the advertised continuum of the
Google on paper. Choosing to focus on other types of analogies in the story, we can maybe
contribute a narrative that is more respectful to the complexity of the past, and more telling
about the problems of the present.
What follows are three such comparisons, which focus on three aspects of continuity
between the documentation theories and archival experiments Otlet was involved in, and the
cybernetic theories and practices that Google's capitalist enterprise is an exponent of. The
first one takes a look at the conditions of workers in information infrastructures, who are
fundamental for these systems to work but often forgotten or displaced. Next, an account of
the elements of distribution and control that appear both in the idea of a Reseau Mundaneum
and in the contemporary functioning of data centres, and the resulting interaction with other
types of infrastructures. Finally, there is a brief analysis of the two approaches to the
'organization of world's knowledge', which examines their regimes of truth and the issues that

P.168

P.169

come with them. Hopefully these three short pieces can provide some additional ingredients
for adulterating the sterile recipe of the Google-Otlet sameness.
A. DO ANDROIDS DREAM OF MECHANICAL TURKS?

In a drawing titled Laboratorium Mundaneum, Paul
Otlet depicted his project as a massive factory, processing
books and other documents into end products, rolled out
by a UDC locomotive. In fact, just like a factory,
Mundaneum was dependent on the bureaucratic and
logistic modes of organization of labour developed for
industrial production. Looking at it and at other written
and drawn sketches, one might ask: who made up the
workforce of these factories?
In his Traité de Documentation, Otlet describes
extensively the thinking machines and tasks of intellectual
work into which the Fordist chain of documentation is
broken down. In the subsection dedicated to the people
who would undertake the work though, the only role
described at length is the Bibliothécaire. In a long chapter
that explains what education the librarian should follow, which characteristics are required,
and so on, he briefly mentions the existence of “Bibliotecaire-adjoints, rédacteurs, copistes,
gens de service”[4]. There seems to be no further description nor depiction of the staff that
would write, distribute and search the millions of index cards in order to keep the archive
running, an impossible task for the Bibliothécaire alone.
A photograph from around 1930, taken in the Palais
Mondial, where we see Paul Otlet together with the rest
of the équipe, gives us a better answer. In this beautiful
group picture, we notice that the workforce that kept the
archival machine running was made up of women, but we
do not know much about them. As in telephone switching
systems or early software development[5], gender
stereotypes and discrimination led to the appointment of
female workers for repetitive tasks that required specific
knowledge and precision. According to the ideal image described in "Traité", all the tasks of
collecting, translating, distributing, should be completely
automatic, seemingly without the necessity of human
intervention. However, the Mundaneum hired dozens of
women to perform these tasks. This human-run version of
the system was not considered worth mentioning, as if it
was a temporary in-between phase that should be
overcome as soon as possible, something that was staining the project with its vulgarity.
Notwithstanding the incredible advancement of information technologies and the automation
of innumerable tasks in collecting, processing and distributing information, we can observe
the same pattern today. All automatic repetitive tasks that technology should be able to do for
us are still, one way or another, relying on human labour. And unlike the industrial worker
who obtained recognition through political movements and struggles, the role of many
cognitive workers is still hidden or under-represented. Computational linguistics, neural
networks, optical character recognition, all amazing machinic operations are still based on
humans performing huge amounts of repetitive intellectual tasks from which software can
learn, or which software can't do with the same efficiency. Automation didn't really free us
from labour; it just shifted the where, when and who of labour[6]. Mechanical turks, content
verifiers, annotators of all kinds... The software we use requires a multitude of tasks which
are invisible to us, but are still accomplished by humans. Who are they? When possible,
work is outsourced to foreign English-speaking countries with lower wages, like India. In the
western world it follows the usual pattern: female, lower income, ethnic minorities.
An interesting case of heteromated labour is that of the so-called Scanops[7], a set of Google workers who have a
different type of badge and are isolated in a section of the
Mountain View complex secluded from the rest of the
workers through strict access permissions and fixed time
schedules. Their work consists of scanning the pages of
printed books for the Google Books database, a task that
is still more convenient to do by hand (especially in the
case of rare or fragile books). The workers are mostly
women and ethnic minorities, and there is no mention of
them on the Google Books website or elsewhere; in fact
the whole scanning process is kept secret. Even though
the secrecy that surrounds this type of labour can be
justified by the need to protect trade secrets, it again
conceals the human element in machine work. This is
even more obvious when compared to other types of
human workers in the project, such as designers and
programmers, who are celebrated for their creativity and
ingenuity.
However, here and there, evidence of the workforce shows up in the result of their labour.
Photos of Google Books employees' hands sometimes mistakenly end up in the digital
version of the book online[8].
Whether the tendency to hide the human presence is due to the unfulfilled wish for total
automation, to avoid the bad publicity of low wages and precarious work, or to keep an aura
of mystery around machines, remains unclear, both in the case of Google Books and the

P.170

P.171

Palais Mondial. Still, it is reassuring to know that the products hold traces of the work, that
even with the progressive removal of human signs in
automated processes, the workers' presence never
disappears completely. This presence is proof of the
materiality of information production, and becomes a sign.

"... in a webpage or the OCR scanned pages of a book, reflect a negligence to the processes and
labor of writing, editing, design, layout, typesetting, and eventually publishing, collecting and
cataloging."[9]

In 2013, while Prime Minister Di Rupo was celebrating the beginning of the second phase
of constructing the Saint Ghislain data centre, a few hundred kilometres away a very similar
situation started to unfold. In the municipality of Eemsmond, in the Dutch province of
Groningen, the local Groningen Sea Ports and NOM development were rumoured to have
plans with another code-named company, Saturn, to build a data centre in the small port of
Eemshaven.
A few months later, when it was revealed that Google
was behind Saturn, Harm Post, director of Groningen
Sea Ports, commented: "Ten years ago Eemshaven
became the laughing stock of ports and industrial
development in the Netherlands, a planning failure of the
previous century. And now Google is building a very
large data centre here, which is 'pure advertisement' for
Eemshaven and the data port."[10] Further details on tax
cuts were not disclosed and once finished, the data centre will provide at most 150 jobs in
the region.
Yet another territory fortunate enough to be chosen by Google, just like Mons; but what are the selection
criteria? For one thing, data centres need to interact with existing infrastructures and flows of
various type. Technically speaking, there are three prerequisites: being near a substantial
source of electrical power (the finished installation will consume twice as much as the whole
city of Groningen); being near a source of clean water, for the massive cooling demands;
being near Internet infrastructure that can assure adequate connectivity. There is also a
whole set of non-technical elements, that we can sum up as the social, economical and
political climate, which proved favourable both in Mons and Eemshaven.
The push behind constructing new sites in new locations, rather than expanding existing ones, is
partly due to the rapidly growing importance of Software as a Service, so-called cloud
computing, which is the rental of computational power from a central provider. With the rise
of the SaaS paradigm the geographical and topological placement of data centres becomes of
strategic importance to achieve lower latencies and more stable service. For this reason,

Google has in the last 10 years been pursuing a policy of end-to-end connection between its
facilities and user interfaces. This includes buying leftover fibre networks[11], entering the
business of underwater sea cables[12] and building new data centres, including the ones in
Mons and Eemshaven.
The spread of data centres around the world, along the main network cables across
continents, represents a new phase in the diagram of the Internet. This should not be
confused with the idea of decentralization that was a cornerstone value in the early stages of
interconnected networks.[13] During the rapid development of the Internet and the Web, the
new tenets of immediacy, unlimited storage and exponential growth led to the centralization
of content in increasingly large server farms. Paradoxically, it is now the growing
centralization of all kinds of operations in specific buildings that is fostering their distribution.
The tension between centralization and distribution and the dependence on neighbouring
infrastructures such as the electrical grid is not an exclusive feature of contemporary data storage
and networking models. Again, similarities emerge from the history of the Mundaneum,
illustrating how these issues relate closely to the logistic organization of production first
implemented during the industrial revolution, and theorized within modernism.
Centralization was seen by Otlet as the most efficient way to organize content, especially in
view of international exchange[14] which already caused problems related to space back then:
the Mundaneum archive counted 16 million entries at its peak, occupying around 150
rooms. The cumbersome footprint, and the growing difficulty of finding stable locations for it,
contributed to the conviction that the project should be included in the plans of new modernist
cities. In the beginning of the 1930s, when the Mundaneum started to lose the support of the
Belgian government, Otlet thought of a new site for it as part of a proposed Cité Mondiale,
which he tried in different locations with different approaches.
Between various attempts, he participated in the competition for the development of the Left
Bank in Antwerp. The most famous modernist urbanists of the time were invited to plan the
development from scratch. At the time, the left bank was completely vacant. Otlet lobbied for
the insertion of a Mundaneum in the plans, stressing how it would create hundreds of jobs for
the region. He also flattered the Flemish pride by insisting on how people from Antwerp
were more hard working than the ones from Brussels, and how they would finally obtain their
deserved recognition, when their city would be elevated to World City status.[15] He partly
succeeded in his propaganda; aside from his own proposal, developed in collaboration with
Le Corbusier, many other participants included Otlet's Mundaneum as a key facility in their
plans. In these proposals, Otlet's archival infrastructure was shown in interaction with the
existing city flows such as industrial docks, factories, the
railway and the newly constructed stock market.[16] The
modernist utopia of a planned living environment implied
that methods similar to those employed for managing the
flows of coal and electricity could be used for the
organization of culture and knowledge.

P.172
From From Paper Mill to Google
Data Center:
In a sense, data centers are similar to
the capitalist factory system; but

P.173

The Traité de Documentation, published in 1934, includes an extended reflection on a
Universal Network of Documentation, that would coordinate the transfer of knowledge
between different documentation centres such as libraries or the Mundaneum[17]. In fact the
existing Mundaneum would simply be the first node of a wide network bound to expand to
the rest of the world, the Reseau Mundaneum. The nodes of this network are explicitly
described in relation to "post, railways and the press, those three essential organs of modern
life which function unremittingly in order to unite men, cities and nations."[18] In the same
period, in letter exchanges with Patrick Geddes and Otto Neurath, commenting on the
potential of heliographies as a way to distribute knowledge, the three imagine the White Link
, a network to distribute copies throughout a series of Mundaneum nodes[19]. As a result, the
same piece of information would be serially produced and logistically distributed, described
as a sort of moving Mundaneum idea, facilitated by the railway system[20]. No wonder that
future Mundaneums were foreseen to be built next to a train station.
In Otlet's plans for a Reseau Mundaneum we can already detect some of the key
transformations that reappear in today's data centre scenario. First of all, a drive for
centralization, with the accumulation of materials that led to the monumental plans of World
Cities. In parallel, the push for international exchange, resulting in a vision of a distribution
network. Thirdly, the placement of the hypothetic network nodes along strategic intersections
of industrial and logistic infrastructure.
While the plan for Antwerp was in the end rejected in favour of more traditional housing
development, 80 years later the legacy of the relation between existing infrastructural flows
and logistics of documentation storage is highlighted by the data ports plan in Eemshaven.
Since private companies are the privileged actors in these types of projects, the circulation of
information increasingly responds to the same tenets that regulate the trade of coal or
electricity. The very different welcome that traditional politics reserve for Google data centres
is a symptom of a new dimension of power in which information infrastructure plays a vital
role. The celebrations and tax cuts that politicians lavish on these projects cannot be
explained with 150 jobs or economic incentives for a depressed region alone. They also
indicate how party politics is increasingly confined to the periphery of other forms of power
and therefore struggle to assure themselves a strategic positioning.
C. 025.45UDC; 161.225.22; 004.659GOO:004.021PAG.

The Universal Decimal Classification[21] system, developed by Paul Otlet and Henri
La Fontaine on the basis of the Dewey Decimal Classification system, is still considered one of
their most important realizations, as well as a cornerstone of Otlet's overall vision. Its
adoption, revision and use until today demonstrate a thoughtful and successful approach to
the classification of knowledge.

The UDC differs from Dewey and other bibliographic systems as it has the potential to
exceed the function of ordering alone. The complex notation system could classify phrases
and thoughts in the same way as it would classify a book, going well beyond the sole function
of classification, becoming a real language. One could in fact express whole sentences and
statements in UDC format[22]. The fundamental idea behind it[23] was that books and
documentation could be broken down into their constitutive sentences and boiled down to a
set of universal concepts, regulated by the decimal system. This would make it possible to express
objective truths in a numerical language, fostering international exchange beyond translation and
making the work of science easier by regulating knowledge with numbers. We have to understand
the idea in the time it was originally conceived, a time shaped by positivism and the belief in
the unhindered potential of science to obtain objective universal knowledge. Today,
especially when we take into account the arbitrariness of the decimal structure, it sounds
doubtful, if not preposterous.
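To make the mechanics of the notation concrete, here is a minimal sketch of how such a decimal code behaves, using the class labels quoted in note 21; the table fragment and the function names are invented for the example and are not part of the UDC itself. Every additional digit narrows the class, and reading a code amounts to walking down its prefixes.

# A toy fragment of a decimal classification table; the labels follow the
# example given in note 21, the data structure itself is purely illustrative.
UDC_TABLE = {
    "1": "Philosophy",
    "16": "Logic",
    "161": "Fundamentals of Logic",
    "161.2": "Statements",
    "161.22": "Type of Statements",
    "161.225": "Real and ideal judgements",
    "161.225.2": "Ideal Judgements",
    "161.225.22": "Statements on equality, similarity and dissimilarity",
}

def format_code(digits):
    # Group the digits the way UDC prints them: three digits, then
    # dot-separated groups of (at most) three.
    groups = [digits[:3]] + [digits[i:i + 3] for i in range(3, len(digits), 3)]
    return ".".join(g for g in groups if g)

def expand(notation):
    # Walk a UDC-style code from its broadest class down to the full notation.
    digits = notation.replace(".", "")
    codes = [format_code(digits[:i]) for i in range(1, len(digits) + 1)]
    return [(code, UDC_TABLE.get(code, "?")) for code in codes]

for code, label in expand("161.225.22"):
    print(code, "-", label)

Read top to bottom, the expansion of 161.225.22 reproduces the chain given in note 21; the colon in a compound notation such as the one in this section's title is the UDC sign that relates two classes to each other, which is part of what lets the system approach the status of a language.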
However, the linguistic-numeric element of UDC, which makes it possible to express fundamental
meanings through numbers, plays a key role in the oeuvre of Paul Otlet. In his work we learn
that numerical knowledge would be the first step towards a science of combining basic
sentences to produce new meaning in a systematic way. When we look at Monde, Otlet's
second publication from 1935, the continuous reference to multiple algebraic formulas that
describe how the world is composed suggests that we could at one point “solve” these
equations and modify the world accordingly.[24] Complementary to the Traité de
Documentation, which described the systematic classification of knowledge, Monde set the
basis for the transformation of this knowledge into new meaning.
Otlet wasn't the first to envision an algebra of thought. It has been a recurring topos in
modern philosophy, under the influence of scientific positivism and in concurrence with the
development of mathematics and physics. Even though one could trace it back to Ramon
Llull and even earlier forms of combinatorics, the first to consistently undertake this scientific
and philosophical challenge was Gottfried Leibniz. The German philosopher and
mathematician, a precursor of the field of symbolic logic, which developed later in the 20th
century, researched a method that reduced statements to minimum terms of meaning. He
investigated a language which “... will be the greatest instrument of reason,” for “when there
are disputes among persons, we can simply say: Let us calculate, without further ado, and
see who is right”.[25] His inquiry was divided into two phases. The first one, analytic, the
characteristica universalis, was a universal conceptual language to express meanings, of which
we only know that it worked with prime numbers. The second one, synthetic, the calculus
ratiocinator, was the algebra that would allow operations between meanings, of which there is
even less evidence. The idea of calculus was clearly related to the infinitesimal calculus, a
fundamental development that Leibniz conceived in the field of mathematics, and which
Newton concurrently developed and popularized. Even though not much remains of
Leibniz's work on his algebra of thought, it was continued by mathematicians and logicians in
the 20th century. Most famously, and curiously enough around the same time Otlet

P.174

P.175

published Traité and Monde, the logician Kurt Gödel used the same idea of a translation into
prime numbers to demonstrate his incompleteness theorem.[26] The fact that the characteristica
universalis only made sense in the fields of logics and mathematics is due to the fundamental
problem presented by a mathematical approach to truth beyond logical truth. While this
problem was not yet evident at the time, it would emerge in the duality of language and
categorization, as it did later with Otlet's UDC.
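The prime-number trick mentioned above can be made concrete in a few lines. The following sketch is only an illustration of the basic move of Gödel numbering (the symbol table and the function names are invented for the example): a string of symbols becomes a single number, 2 raised to the first symbol's code times 3 raised to the second's, and so on, and the string can be recovered again by factoring.

# A minimal Gödel-style encoding: each symbol gets a code, a sequence of
# symbols becomes the single number 2**c1 * 3**c2 * 5**c3 * ..., and the
# sequence is recovered by factoring that number again.
# The symbol table is invented for the example.
SYMBOLS = {"0": 1, "s": 2, "=": 3, "+": 4, "(": 5, ")": 6}
DECODE = {code: symbol for symbol, code in SYMBOLS.items()}

def primes():
    # Yield 2, 3, 5, 7, ... by trial division; fine for short sequences.
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def goedel_number(sequence):
    number = 1
    for p, symbol in zip(primes(), sequence):
        number *= p ** SYMBOLS[symbol]
    return number

def decode(number):
    sequence = []
    for p in primes():
        if number == 1:
            break
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        sequence.append(DECODE[exponent])
    return "".join(sequence)

n = goedel_number("s0=0+s0")
print(n, decode(n) == "s0=0+s0")   # one (large) number stands for the whole statement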
The relation between organizational and linguistic aspects of knowledge is also one of the
open issues at the core of web search, which is, at first sight, less interested in objective
truths. At the beginning of the Web, around the mid '90s, two main approaches to online
search for information emerged: the web directory and web crawling. Some of the first search
engines like Lycos or Yahoo!, started with a combination of the two. The web directory
consisted of the human classification of websites into categories, done by an “editor”; crawling
consisted of the automatic accumulation of material by following links, with different rudimentary
techniques to assess the content of a website. With the exponential growth of web content on
the Internet, web directories were soon dropped in favour of the more efficient automatic
crawling, which in turn generated so many results that quality became of key importance.
Quality in the sense of the assessment of the webpage content in relation to keywords as well
as the sorting of results according to their relevance.
Google's hegemony in the field has mainly been obtained by translating the relevance of a
webpage into a numeric quantity according to a formula, the infamous PageRank algorithm.
This value is calculated depending on the relational importance of the webpage where the
word is placed, based on how many other websites link to that page. The classification part is
long gone, and linguistic meaning is also structured along automated functions. What is left is
reading the network formation in numerical form, capturing human opinions represented by
hyperlinks, i.e. which word links to which webpage, and which webpage is generally more
important. In the same way that UDC systematized documents via a notation format, the
systematization of relational importance in numerical format brings functionality and
efficiency. In this case the translation is value-based rather than linguistic, quantifying network
attention independently from meaning. The interaction with the other infamous Google
algorithm, AdSense, adds an economic value to the PageRank position. The influence and
profit deriving from how high a search result is placed mean that the relevance of a word-website relation in Google search results translates into an actual relevance in reality.
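As a point of reference for how such a translation into numbers works, here is a minimal power-iteration sketch of the original PageRank formula. The four-page link graph, the damping factor of 0.85 and the number of iterations are illustrative; the production ranking has long since accumulated many more signals (see note 27).

# Minimal power-iteration sketch of the original PageRank formula: a page's
# score is a damped sum of the scores of the pages linking to it, divided by
# the number of links those pages give out. The toy link graph is invented.
LINKS = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing) if outgoing else 0.0
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

for page, score in sorted(pagerank(LINKS).items(), key=lambda item: -item[1]):
    print(page, round(score, 3))

In this toy graph, "c" and "a" end up on top simply because they accumulate the links of the others: a relational "truth" produced entirely by the shape of the network, not by anything the pages say.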
Even though both Otlet and Google say they are tackling the task of organizing knowledge,
we could posit that from an epistemological point of view the approaches that underlie their
respective projects, are opposite. UDC is an example of an analytic approach, which
acquires new knowledge by breaking down existing knowledge into its components, based on
objective truths. Its propositions could be exemplified with the sentences “Logic is a
subdivision of Philosophy” or “PageRank is an algorithm, part of the Google search engine”.
PageRank, on the contrary, is a purely synthetic one, which starts from the form of the
network, in principle devoid of intrinsic meaning or truth, and creates a model of the

network's relational truths. Its propositions could be exemplified with “Wikipedia is of the
utmost relevance” or “The University of District Columbia is the most relevant meaning of
the word 'UDC'”.
We (and Google) can read the model of reality created by the PageRank algorithm (and all
the other algorithms that were added over the years[27]) in two different ways. It can be
considered a device that 'just works' and does not pretend to be true but can give results
which are useful in reality, a view we can call pragmatic, or instead, we can see this model as
a growing and improving construction that aims to coincide with reality, a view we can call
utopian. It's no coincidence that these two views fit the two stereotypical faces of Google, the
idealistic Silicon Valley visionary one, and the cynical corporate capitalist one.
From our perspective, it is of relative importance which of the two sides we believe in. The
key issue remains that such a structure has become so influential that it produces its own
effects on reality, that its algorithmic truths are more and more considered as objective truths.
While the utility and importance of a search engine like Google are out of the question, it is
necessary to be alert about such concentrations of power, especially if they are only
controlled by a corporation which, beyond mottoes and utopias, has by definition the single
duty to make profits and obey its stakeholders.
1. A good account of such phenomenon is described by David Golumbia. http://www.uncomputing.org/?p=221
2. As described in the classic text looking at the ideological ground of Silicon Valley culture. http://www.hrc.wmin.ac.uk/theorycalifornianideology-main.html
3. For an account of Toffler's determinism, see http://www.ukm.my/ijit/IJIT%20Vol%201%202012/7wan%20fariza.pdf .
4. Otlet, Paul. Traité de documentation: le livre sur le livre, théorie et pratique. Editiones Mundaneum, 1934: 393-394.
5. http://gender.stanford.edu/news/2011/researcher-reveals-how-%E2%80%9Ccomputer-geeks%E2%80%9D-replaced-%
E2%80%9Ccomputergirls%E2%80%9D
6. This process has been named “heteromation”, for a more thorough analysis see: Ekbia, Hamid, and Bonnie Nardi.
“Heteromation and Its (dis)contents: The Invisible Division of Labor between Humans and Machines.” First Monday 19, no.
6 (May 23, 2014). http://firstmonday.org/ojs/index.php/fm/article/view/5331
7. The name scanops was first introduced by artist Andrew Norman Wilson when he found out about this category of workers
during his artistic residency at Google in Mountain View. See http://www.andrewnormanwilson.com/WorkersGoogleplex.html
.
8. As collected by Krissy Wilson on her http://theartofgooglebooks.tumblr.com .
9. http://informationobservatory.info/2015/10/27/google-books-fair-use-or-anti-democratic-preemption/#more-279
10. http://www.rtvnoord.nl/nieuws/139016/Keerpunt-in-de-geschiedenis-van-de-Eemshaven .
11. http://www.cnet.com/news/google-wants-dark-fiber/ .
12. http://spectrum.ieee.org/tech-talk/telecom/internet/google-new-brazil-us-internet-cable .
13. See Baran, Paul. “On Distributed Communications.” Product Page, 1964. http://www.rand.org/pubs/research_memoranda/
RM3420.html .
14. Pierce, Thomas. Mettre des pierres autour des idées. Paul Otlet, de Cité Mondiale en de modernistische stedenbouw in de jaren
1930. PhD dissertation, KULeuven, 2007: 34.
15. Ibid: 94-95.
16. Ibid: 113-117.
17. Otlet, Paul. Traité de documentation: le livre sur le livre, théorie et pratique. Editiones Mundaneum, 1934.
18. Otlet, Paul. Les Communications MUNDANEUM, Documentatio Universalis, doc nr. 8438
19. Van Acker, Wouter. “Internationalist Utopias of Visual Education: The Graphic and Scenographic Transformation of the
Universal Encyclopaedia in the Work of Paul Otlet, Patrick Geddes, and Otto Neurath.” Perspectives on Science 19, no. 1
(January 19, 2011): 68-69.
20. Ibid: 66.

P.176

P.177

21. The Decimal part in the name means that any records can be further subdivided by tenths, virtually infinitely, according to an
evolving scheme of depth and specialization. For example, 1 is “Philosophy”, 16 is “Logic”, 161 is “Fundamentals of Logic”,
161.2 is “Statements”, 161.22 is “Type of Statements”, 161.225 is “Real and ideal judgements”, 161.225.2 is “Ideal
Judgements” and 161.225.22 is “Statements on equality, similarity and dissimilarity”.
22. “The UDC and FID: A Historical Perspective.” The Library Quarterly 37, no. 3 (July 1, 1967): 268-270.
23. Described in French by the word dépouillement.
24. Otlet, Paul. Monde, essai d’universalisme: connaissance du monde, sentiment du monde, action organisée et plan du monde.
Editiones Mundaneum, 1935: XXI-XXII.
25. Leibniz, Gottfried Wilhelm, The Art of Discovery 1685, Wiener: 51.
26. https://en.wikipedia.org/wiki/G%C3%B6del_numbering
27. A fascinating list of all the algorithmic components of Google search is at https://moz.com/google-algorithm-change .

Madame
C/
Mevrouw
C
FEMKE SNELTING

MADAME C.01
EN

When I arrived in Brussels that autumn, I was still very young. I thought that as an au-pair
I would be helping out in the house, but instead I ended up working with the professor on
finishing his book. At the time I arrived, the writing was done but his handwriting was so
hard to decipher that the printer had a difficult time working with the manuscript. It became
my job to correct the typeset proofs but often there were words that neither the printer nor I
could decipher, so we had to ask. And the professor often had no time for us. So I did my
best to make the text as comprehensible as possible.
On the title page of the final proofs from the printer, the professor wrote me:

After five months of work behind the same table, here it is. Now it is your turn to sow the
good seed of documentation, of institution, and of Mundaneum, through the pre-book and the
spoken word[1]
NL

Toen ik die herfst in Brussel arriveerde was ik nog heel jong. Ik dacht dat ik als au-pair in
de huishouding zou helpen, maar in plaats daarvan moest ik de professor helpen met het
afmaken van zijn boek. Toen ik aankwam was het schrijven al afgerond, maar de drukker
worstelde nog met het manuscript omdat het handschrift moeilijk te ontcijferen was. Het
werd mijn taak om de drukproeven te corrigeren. Er waren veel woorden die de drukker en
ik niet konden ontcijferen, dus dan moesten we het navragen. Maar vaak had de professor
geen tijd voor ons. Ik deed dan mijn best om de tekst zo leesbaar mogelijk te maken.
Op de titelpagina van de definitieve drukproef schreef de professor me:

P.178

P.179

Na vijf maanden gewerkt te hebben aan dezelfde tafel is hier het resultaat. Nu is het jouw
beurt om via het boek, het voor-boek, het woord, het goede zaad te zaaien van documentatie,
instituut en Mundaneum.[2]
MADAME C.02
EN

She serves us coffee from a ceramic coffee pot and also a cake bought at the bakery next
door. It's all written in the files, she reminds us repeatedly, and she tells us about one day in the sixties when her husband returned home, telling her excitedly that he had discovered the Mundaneum at Chaussée de Louvain in Brussels. Ever since, he would return to the same
building, making friends with the friends of the Palais Mondial, those dedicated caretakers of
the immense paper heritage.
I haven't been there so often myself, she says. But I do remember there were cats, to keep the mice away from the paper. And my husband loved cats. So in the eighties, when he was finally in a position to save the archives, the cats had to be taken care of too. He wanted the cats to be written into the inventory.
We finish our coffee and she takes us behind a curtain that separates the salon from a small
office. She shows us four green binders that contain the meticulously filed papers of her late
husband pertaining to the Mundaneum. In the third is the Donation act that describes the
transfer of the archives from the Friends of the Palais Mondial to the Centre de Lecture
Public of the French community.
In the inventory, the cats are nowhere to be found.[3]
NL

Ze schenkt ons koffie uit een keramieken koffiepot en serveert gebak dat ze bij de
naburige bakkerij kocht. Herhaaldelijk herinnert ze ons eraan dat 'het allemaal geschreven
staat in de documenten'. Ze vertelt ons dat in de jaren zestig, haar man op een dag
thuiskwam en opgewonden vertelde dat hij het Mundaneum ontdekt had op de Leuvense
Steenweg in Brussel. Sindsdien keerde hij daar regelmatig terug om de vrienden van het
Palais Mondial te ontmoeten: de toegewijde verzorgers van die immense papieren erfenis.
Ik ben er zelf niet zo vaak geweest, zegt ze. Maar ik herinner me dat er katten waren om de
muizen weg te houden van al het papier. En mijn man hield van katten. In de jaren tachtig,
toen hij eindelijk een positie had die hem in staat stelde om de archieven te redden, moest er
ook voor de katten worden gezorgd. Hij wilde de katten opnemen in de inventaris.
We drinken onze koffie op en ze neemt ons mee achter een gordijn dat de salon van een
klein kantoor scheidt. Ze toont ons vier groene mappen met de keurig geordende papieren
van haar voormalige echtgenoot over het Mundaneum. In de derde map bevindt zich de akte

die de overdracht van de archieven beschrijft van de Vrienden van het Palais Mondial aan
het Centre de Lecture Public van de Franse Gemeenschap (CLPCF).
In de inventaris is geen spoor van de katten te vinden.[4]
MADAME C.03
EN

In a margarine box, between thousands of notes, tickets, postcards, letters, all folded to the
size of an index card, we find this:

Paul, leave me the key to mythe house, I forgot mine. Put it on your desk, in the small index
card box.[5]
NL

In een grote margarinedoos, tussen duizenden bonnetjes, aantekeningen, briefkaarten, en
brieven, allemaal gevouwen op maat van een indexkaart, vinden we een bericht:

Paul, laat je de sleutel van mijnhet huis voor mij achter, ik ben de mijne vergeten. Stop hem in
het kleine indexkaartdoosje op je bureau.[6]

P.180

P.181

Last
Revision:
2·08·2016

1. EN
Wilhelmina Coops came from The Netherlands to Brussels in 1932 to learn French. She was instrumental in transforming
Le Traité de Documentation into a printed book.
2. NL
Wilhelmina Coops kwam in 1932 uit Nederland naar Brussel om Frans te leren. Ze hielp het manuscript voor Le Traité de
Documentation omzetten naar een gedrukt boek.
3. EN
The act is dated April 4 1985. Madame Canonne is a librarian, widow of André Canonne († 1990). She is custodian of
the documents relating to the wanderings of The Mundaneum in Brussels.
4. NL
De akte is gedateerd op 4 april 1985. Madame Canonne is bibliothecaresse en weduwe van André Canonne († 1990).
Ze is de bewaarster van documenten die gerelateerd zijn aan de omzwervingen van het Mundaneum in Brussel.
5. EN
Cato van Nederhasselt, second wife of Paul Otlet, collaborated with her husband on many projects. Her family fortune kept
the Mundaneum running after other sources had dried up.
6. NL
Cato van Nederhasselt, de tweede vrouw van Paul Otlet, werkte met haar man aan vele projecten. Nadat alle andere
bronnen waren uitgeput hield haar familiefortuin het Mundaneum draaiende.

A Preemptive
History
of the
Google
Cultural
Institute
GERALDINE JUÁREZ

I. ORGANIZING INFORMATION IS NEVER INNOCENT

Six years ago, Google, an Alphabet company, launched a new project: The Google Art
Project. The official history, the one written by Google and distributed mainly through
tailored press releases and corporate news bits, tells us that it all started as “a 20% project
within Google in 2010 and had its first public showing in 2011. It was 17 museums,
coming together in a very interesting online platform, to allow users to essentially explore art
in a very new and different way."[1] While Google Books faced legal challenges and the
European Commission launched its antitrust case against Google in 2010, the Google Art
Project, not coincidentally, scaled up gradually, resulting in the Google Cultural Institute with
headquarters in Paris, “whose mission is to make the world's culture accessible online.”[2]
The Google Cultural Institute is strictly divided into Art Project, Historical Moments and
World Wonders, roughly corresponding to fine art, world history and material culture.
Technically, the Google Cultural Institute can be described as a database that powers a
repository of high-resolution images of fine art, objects, documents and ephemera, as well as
information about and from their ‘partners’ - the public museums, galleries and cultural
institutions that provide this cultural material - such as 3D tour views and street-view maps.
So far and counting, the Google Cultural Institute hosts 177 digital reproductions of selected
paintings in gigapixel resolution and 320 3D versions of different objects, together with
multiple thematic slide shows curated in collaboration with their partners or by their users.

P.182

P.183

According to their website, in their ‘Lab’ they develop the “new technology to help partners
publish their collections online and reach new audiences, as seen in the Google Art Project,
Historic Moments and World Wonders initiatives.” These services are offered – not by
chance – as a philanthropic service to public institutions that increasingly need to justify their existence in the face of cuts and other managerial demands of austerity policies in Europe
and elsewhere.
The Google Cultural Institute “would be unlikely, even unthinkable, absent the chronic and
politically induced starvation of publicly funded cultural institutions even throughout the
wealthy countries”[3]. It is important to understand that what Google is really doing is
bankrolling the technical infrastructure and labour needed to turn culture into data. In this way culture can be easily managed and can feed all kinds of products needed in the neoliberal city to promote and exploit these cultural ‘assets’, in order to compete with other urban centres on the global stage, but also to feed Google’s unstoppable accumulation of information.
The head of the Google Cultural Institute knows there are a lot of questions about their
activities but Alphabet chose to label legitimate critiques as misunderstandings: “This is our
biggest battle, this constant misunderstanding of why the Cultural Institute actually exists.”[4]
The Google Cultural Institute, much like many other cultural endeavours of Google like
Google Books and their Digital Revolution art exhibition, has been subject to a few but
much needed critiques, such as Powered by Google: Widening Access and Tightening
Corporate Control (Schiller & Yeo 2014), an in-depth account of the origins of this cultural
intervention and its role in the resurgence of welfare capitalism, “where people are referred to
corporations rather than states for such services as they receive; where corporate capital
routinely arrogates to itself the right to broker public discourse; and where history and art
remain saturated with the preferences and priorities of elite social classes.”[5]
Known as one of the first essays, if not the first, to dissect Google's use of information and the rhetoric of democratization behind it to reorganize cultural public institutions as a “site of profit-making”, Schiller & Yeo’s text is fundamental to understanding the evolution of the Google Cultural Institute within the historical context of digital capitalism, where the global dependency on communication and information technologies is directly linked to the current crisis of accumulation and where Google's archive fever “evinces a breath-taking cultural and
ideological range.”[6]
II. WHO COLONIZES THE COLONIZERS?

The Google Cultural Institute is a complex subject of interest since it reflects the colonial
impulses embedded in the scientific and economic desires that formed the very collections
which the Google Cultural Institute now mediates and accumulates in its database.

Who colonizes the colonizers? It is a very difficult issue which I have raised before in an
essay dedicated to the Google Cultural Institute, Alfred Russel Wallace and the colonial
impulse behind archive fevers from the 19th but also the 21st century. I have no answer yet.
But a critique of the Google Cultural Institute where their motivations are interpreted as
merely colonialist would be misleading and counterproductive. It is not their goal to enslave and exploit whole populations and their resources in order to impose a new ideology and civilise barbarians in the same sense and way that European countries did during the Colonization. Additionally, it would be unfair and disrespectful to all those who still have to deal with the endless effects of Colonization, which have been exacerbated by the expansion of economic globalisation.
The conflation of technology and science that has produced the knowledge to create such an
entity as Google and its derivatives, such as the Cultural Institute, together with the scale of
its impact on a society where information technology is the dominant form of technology,
makes technocolonialism a more accurate term to describe Google's cultural interventions
from my perspective.
Although technocolonization shares many traits and elements with the colonial project,
starting with the exploitation of materials needed to produce information and media
technologies – and the related conflicts that this produces –, information technologies still
differ from ships and canons. However, the commercial function of maritime technologies is
the same as the free – as in free trade – services deployed by Google or Facebook’s drones
beaming internet to Africa, although the networked aspect of information technologies is
significantly different at the infrastructure level.
There is no official definition of technocolonialism, but it is important to understand it as a
continuation of the idea of Enlightenment that gave birth to the impulse to collect, organise
and manage information in the 19th century. My use of this term aims to emphasize and
situate contemporary accumulation and management of information and data within a
technoscientific landscape driven by “profit above else” as a “logical extension of the surplus
value accumulated through colonialism and slavery.”[7]
Unlike in colonial times, in contemporary technocolonialism the important narrative is not the
supremacy of a specific human culture. Technological culture is the saviour. It doesn’t matter
if the culture is Muslim, French or Mayan, the goal is to have the best technologies to turn it
into data, rank it, produce content from it and create experiences that can be monetized.
It only makes sense that Google, a company with a mission to organise the world’s
information for profit, found ideal partners in the very institutions that were previously in
charge of organising the world’s knowledge. But as I pointed out before, it is paradoxical that
the Google Cultural Institute is dedicated to collecting information from museums created under
Colonialism in order to elevate a certain culture and way of seeing the world above others.
Today we know and are able to challenge the dominant narratives around cultural heritage,

P.184

P.185

because these institutions have an actual record in history and not only a story produced for
the ‘about’ section of a website, like in the case of the Google Cultural Institute.
“What museums should perhaps do is make visitors aware that this is not the only way of
seeing things. That the museum – the installation, the arrangement, the collection – has a
history, and that it also has an ideological baggage”[8]. But the Google Cultural Institute is
not a museum; it is a database with an interface that enables users to browse cultural content.
Unlike the prestigious museums it collaborates with, it lacks a history situated in a specific
cultural discourse. It is about fine art, world wonders and historical moments in a general
sense. The Google Cultural Institute has a clear corporate and philanthropic mission but it
lacks a point of view and a defined position towards the cultural material that it handles. This
is not surprising since Google has always avoided taking a stand; it is all techno-determinism
and the noble mission of organising the world’s information to make the world better. But
“brokering and hoarding information are a dangerous form of techno-colonialism.”[8]
Looking for a cultural narrative beyond the Californian ideology, Alphabet's search engine
found in Paul Otlet and the Mundaneum the perfect cover to insert their philanthropic
services in the history of information science beyond Silicon Valley. After all, they
understand that “ownership over the historical narratives and their material correlates
becomes a tool for demonstrating and realizing economic claims”.[9]
After establishing a data centre in the Belgian city of Mons, home of the Mundaneum
archive center, Google lent its support to "the Mons 2015 adventure, in particular by
working with our longtime partners, the Mundaneum archive. More than a century ago, two
visionary Belgians envisioned the World Wide Web’s architecture of hyperlinks and
indexation of information, not on computers, but on paper cards. Their creation was called
the Mundaneum.”[10]

On the occasion of the 147th birthday of Paul Otlet, a Doodle on the homepage of Alphabet's search engine spelled the name of the company using the ‘drawers of the Mundaneum’ to form the word G O O G L E: “Today’s Doodle pays tribute to Paul’s pioneering work on the

Mundaneum. The collection of knowledge stored in the Mundaneum’s drawers are the
foundational work for everything that happens at Google. In early drafts, you can watch the
concept come to life.”[11]
III. GOOGLE CULTURAL HISTORY

The dematerialisation of public collections using infrastructure and services bankrolled by
private actors like the GCI needs to be questioned and analyzed further in the context of
heterotopic institutions, to understand the new forms taken by the endless tension between
knowledge/power at the core of contemporary archivalism, where the architecture of the
interface replaces and acts on behalf of the museum, and the body of the visitor is reduced to
the fingers of a user capable of browsing endless cultural assets.
At a time when cultural institutions should be decolonised instead of googlified, it is vital to
discuss a project such as the Google Cultural Institute and its continuous expansion – which
is inversely proportional to the failure of the governments and the passivity of institutions
seduced by gadgets[12].
However, the dialogue is fragmented between limited academic accounts, corporate press
releases, isolated artistic interventions, specialised conferences and news reports. Femke
Snelting suggests that we must “find the patience to build a relation to these histories in ways
that make sense.” To do so, we need to excavate and assemble a better account of the
history of the Google Cultural Institute. Building upon Schiller & Yeo’s seminal text, the
following timeline is my contribution to this task and an attempt to put together the pieces, by
situating them in a broader economic and political context beyond the official history told by
the Google Cultural Institute. A closer inspection of the events reveals that the escalation of

P.186

P.187

Alphabet's cultural interventions often emerges after a legal challenge against its economic hegemony in Europe has been initiated.
2009
ERIC SCHMIDT VISITS IRAQ

A news report from the Wall Street Journal[13] as well as an AP report on YouTube[14] confirm
the new Google venture in the field of historical collections. The executive chairman of
Alphabet declared: “I can think of no better use of our time and our resources to make the
images and ideas from your civilization, from the very beginning of time, available to a billion
people worldwide.”
A detailed account and reflection of this visit, its background and agenda can be found in
Powered by Google: Widening Access and Tightening Corporate Control. (Schiller & Yeo
2014)
FRANCE REACTS AGAINST GOOGLE BOOKS

In relation to the Google Books dispute in Europe, Reuters reported in 2009 that France's
ex-president Nicolas Sarkozy “pledged hundreds of millions of euros toward a separate
digitization program, saying he would not permit France to be “stripped of our heritage to the
benefit of a big company, no matter how friendly, big or American it is.”[15]

Although the reactionary and nationalistic agenda of Sarkozy should not be celebrated, it is
important to note that the first open attack on Google’s cultural agenda came from the French
government. Four years later, the Google Cultural Institute establishes its headquarters in
Paris.
2010
EUROPEAN COMMISSION LAUNCHES AN ANTITRUST INVESTIGATION AGAINST
GOOGLE.

The European Commission has decided to open an antitrust investigation into
allegations that Google Inc. has abused a dominant position in online search, in
violation of European Union rules (Article 102 TFEU). The opening of formal
proceedings follows complaints by search service providers about unfavourable
treatment of their services in Google's unpaid and sponsored search results coupled
with an alleged preferential placement of Google's own services. This initiation of
proceedings does not imply that the Commission has proof of any infringements. It
only signifies that the Commission will conduct an in-depth investigation of the case as a matter of priority.[16]
THE GOOGLE ART PROJECT STARTS AS A 20% PROJECT UNDER THE DIRECTION
OF AMIT SOOD.

According to the Guardian[17], and other news reports, Google's cultural project is started by
passionate art “googlers”.
GOOGLE ANNOUNCES ITS PLANS TO BUILD A EUROPEAN CULTURAL INSTITUTE IN
FRANCE

Referring to France as one of the most important centres for culture and technology, Google
CEO Eric Schmidt formally announces the creation of a centre "dedicated to technology,
especially noting the promotion of past, present and future European cultures."[18]
2011
GOOGLE ART PROJECT LAUNCHES IN TATE LONDON.

In February the new ‘product’ is officially presented. The introduction[19] emphasises that it
started as a 20% project, meaning a project that lacked a corporate mandate.
According to the “Our Story”[20] section of the Google Cultural Institute, the history of the
Google Art Project starts with the integration of 140,000 assets from the Yad Vashem
World Holocaust Centre, followed by the inclusion of the Nelson Mandela Archives in the
Historical Moments section of the Google Cultural Institute.

P.188

P.189

Later in August, Eric Schmidt declares that education should bring art and science together
just like in “the glory days of the Victorian Era”.[21]
2012
EU DATA AUTHORITIES INITIATE A NEW INVESTIGATION INTO GOOGLE AND
THEIR NEW TERMS OF USE.

At the request of the French authorities, the European Union initiates an investigation against
Google, related to the breach of data privacy due to the new terms of use published by
Google on 1 March 2012.[22]
THE GOOGLE CULTURAL INSTITUTE CONTINUES TO DIGITALIZE CULTURAL
‘ASSETS’.

According to the Google Cultural Institute website, 151 partners join the Google Art Project, including France's Musée d'Orsay. The World Wonders section is launched, including partnerships with the likes of UNESCO. By October, the platform is rebranded and re-launched, now with more than 400 partners.
2013
GOOGLE CULTURAL INSTITUTE HEADQUARTERS OPENS IN PARIS.

On 10 December, the new French headquarters open at 8 rue de Londres. The French
Minister Aurélie Filippetti cancels her attendance as she doesn’t “wish to appear as a
guarantee for an operation that still raises a certain number of questions."[23]
BRITISH TAX AUTHORITIES INITIATE INVESTIGATION INTO GOOGLE'S TAX
SCHEME

An HM Revenue and Customs committee inquiry brands Google's tax operations in the UK
via Ireland as "devious, calculated and, in my view, unethical".[24]
2014
EUROPEAN COURT OF JUSTICE RULES ON THE “RIGHT TO BE FORGOTTEN”
AGAINST GOOGLE.

The controversial ruling holds search engines responsible for the personal data they handle. Under European law, the court ruled “that the operator is, in certain circumstances,
obliged to remove links to web pages that are published by third parties and contain
information relating to a person from the list of results displayed following a search made on

the basis of that person’s name. The Court makes it clear that such an obligation may also
exist in a case where that name or information is not erased beforehand or simultaneously
from those web pages, and even, as the case may be, when its publication in itself on those
pages is lawful.”[25]
DIGITAL REVOLUTION AT BARBICAN UK

Google sponsors the exhibition Digital Revolution[26] and commissions artworks under the brand “Dev-art: art made with code”[27]. The exhibition later tours to the Tekniska Museet in
Stockholm.[28]
GOOGLE CULTURAL INSTITUTE'S “THE LAB” OPENS

“Here creative experts and technology come together to share ideas and build new ways to
experience art and culture.”[29]
GOOGLE EXPRESSES ITS PLANS TO SUPPORT THE CITY OF MONS, EUROPEAN
CAPITAL OF CULTURE IN 2015.

A press release from Google[30] describes the new partnership with the Belgian city of Mons
as a result of their position as local employer and investor in the city, since one of their two
major data centres in Europe is located there.
2015
EU COMMISSION SENDS STATEMENT OF OBJECTIONS TO GOOGLE.

The European Commission has sent a Statement of Objections to Google alleging the
company has abused its dominant position in the markets for general internet search
services in the European Economic Area (EEA) by systematically favouring its own comparison shopping product in its general search results pages.”[31]

Google rejects the accusations as “wrong as a matter of fact, law and economics”.[32]
EUROPEAN COMMISSION STARTS INVESTIGATION INTO ANDROID.

The Commission will assess if, by entering into anticompetitive agreements and/or by
abusing a possible dominant position, Google has illegally hindered the development
and market access of rival mobile operating systems, mobile communication
applications and services in the European Economic Area (EEA). This investigation
is distinct and separate from the Commission investigation into Google's search
business.[33]
GOOGLE CULTURAL INSTITUTE CONTINUES TO EXPAND.

According to the ‘Our Story’ section of the Google Cultural Institute, the Street Art project
now has 10,000 assets. A new extension displays art from the Google Art Project in the
Chrome browser and “art lovers can wear art on their wrists via Android art”. By August,

P.190

P.191

the project has more than 850 partners using their tools, 4.7 million assets in its collection
and more than 1500 curated exhibitions.
TRANSPARENCY INTERNATIONAL REVEALS GOOGLE AS SECOND BIGGEST CORPORATE LOBBYIST OPERATING IN BRUSSELS.[34]

ALPHABET INC. IS ESTABLISHED ON OCTOBER 2ND.

“Alphabet Inc. (commonly known as Alphabet) is an American multinational conglomerate
created in 2015 as the parent company of Google and several other companies previously
owned by or tied to Google.”[35]
PAUL OTLET DOODLE AND MUNDANEUM-GOOGLE EXHIBITIONS.

Google creates a doodle for their homepage on the occasion of the 147th birthday of Paul
Otlet[36] and produces the slide shows Towards the Information Age, Mapping Knowledge
and The 100th Anniversary of a Nobel Peace Prize, all hosted by the Google Cultural
Institute.
“The Mundaneum and Google have worked closely together to curate 9 exclusive online
exhibitions for the Google Cultural Institute. The team behind the reopening of the

Mundaneum this year also worked with the Cultural Institute engineers to launch a dedicated
mobile app.”[37]
GOOGLE CULTURAL INSTITUTE PARTNERS WITH THE BRITISH MUSEUM.

The British Museum announces a “unique partnership” where over 4,500 assets can be “seen online in just a few clicks”. In the official press release, the director of the museum, Neil MacGregor, said “The world today has changed, the way we access information has been revolutionised by digital technology. This enables us to give the Enlightenment ideal
on which the Museum was founded a new reality. It is now possible to make our collection
accessible, explorable and enjoyable not just for those who physically visit, but to everybody
with a computer or a mobile device. ”[38]
GOOGLE CULTURAL INSTITUTE ADDS A PERFORMING ARTS SECTION.

Over 60 performing arts (dance, drama, music, opera) organizations and performers join the
assets collection of the Google Cultural Institute.[39]
2016
CODA

The Google Cultural Institute has quietly changed the name of its platform to “Google Arts & Culture”. The website has also been restructured, and categories have been simplified into “Arts”, “History” and “Wonders”. Its partners and projects are placed at the top of the “Menu”. It is now possible to browse artists and mediums through time and by color. The site

P.192

P.193

offers a daily digest of art and history; cityscapes, galleries and street art views are also only one click away.

An important aspect of this make-over is the way in which it reveals its own instability as a
cultural archive. Before the upgrade, the link http://www.google.com/culturalinstitute/assetviewer/text-as-set-cell-0?exhibitId=QQ-RRh0A would take you to "The origins of the
Internet in Europe”, the page dedicated to the Mundaneum and Paul Otlet. Now it takes
you to a 404 error page. No timestamp, no redirect. No archived copy recorded in the
Wayback Machine. The structure of the new link for this "exhibition" still hints at some sort
of beta state: https://www.google.com/culturalinstitute/beta/exhibit/QQ-RRh0A . How
long can we rely on this cultural institute/beta link?
Should the “curator of the world”[40], as Amit Sood is described in the media, take some responsibility for the reliability of the structure in which Google Arts & Culture displays the cultural material extracted from public institutions which, unlike Google, are mandated to make it accessible? Or should we all just take his word and look away: “I fell into this by mistake.”[41]?

Last
Revision:
2·08·2016

1. Caines, Matthew. “Arts head: Amit Sood, director, Google Cultural Institute” The Guardian. Dec 3, 2013. http://
www.theguardian.com/culture-professionals-network/culture-professionals-blog/2013/dec/03/amit-sood-google-cultural-institute-art-project
2. Google Paris. Accessed Dec 22, 2016 http://www.google.se/about/careers/locations/paris/

3. Schiller, Dan & Yeo, Shinjoung. “Powered By Google: Widening Access And Tightening Corporate Control.” (In Aceti, D.
L. (Ed.). Red Art: New Utopias in Data Capitalism: Leonardo Electronic Almanac, Vol. 20, No. 1. London: Goldsmiths
University Press. 2014):48
4. Dowd, Maureen. “The Google Art Heist”. The New York Times. Sept 12, 2015. http://www.nytimes.com/2015/09/13/
opinion/sunday/the-google-art-heist.html
5. Schiller, Dan & Shinjoung Yeo. “Powered By Google: Widening Access And Tightening Corporate Control.”, 48
6. Schiller, Dan & Yeo, Shinjoung. “Powered By Google: Widening Access And Tightening Corporate Control.”, 48
7. Davis, Heather & Turpin, Etienne, eds. Art in the Anthropocene (London: Open Humanities Press, 2015), 7
8. Bush, Randy. Psg.com On techno-colonialism. (blog) June 13, 2015. Accessed Dec 22, 2015 https://psg.com/ontechnocolonialism.html
9. Starzmann, Maria Theresia. “Cultural Imperialism and Heritage Politics in the Event of Armed Conflict: Prospects for an
‘Activist Archaeology’”. Archeologies. Vol. 4 No. 3 (2008):376
10. Echikson, William. Partnering in Belgium to create a capital of culture (blog) March 20, 2014. Accessed Dec 22, 2015
http://googlepolicyeurope.blogspot.se/2014/03/partnering-in-belgium-to-create-capital.html
11. Google. Mundaneum co-founder Paul Otlet's 147th Birthday (blog) August 23, 2015. Accessed Dec 22, 2015 http://
www.google.com/doodles/mundaneum-co-founder-paul-otlets-147th-birthday
12. eg. https://www.google.com/culturalinstitute/thelab/#experiments
13. Lavallee, Andrew. “Google CEO: A New Iraq Means Business Opportunities.” Wall Street Journal. Nov 24, 2009 http://
blogs.wsj.com/digits/2009/11/24/google-ceo-a-new-iraq-means-business-opportunities/
14. Associated Press. Google Documents Iraqi Museum Treasures (on-line video November 24, 2009) https://
www.youtube.com/watch?v=vqtgtdBvA9k
15. Jarry, Emmanuel. “France's Sarkozy takes on Google in books dispute.” Reuters. December 8, 2009. http://
www.reuters.com/article/us-france-google-sarkozy-idUSTRE5B73E320091208
16. European Commission. Antitrust: Commission probes allegations of antitrust violations by Google (Brussels 2010) http://
europa.eu/rapid/press-release_IP-10-1624_en.htm
17. Caines, Matthew. “Arts head: Amit Sood, director, Google Cultural Institute” The Guardian. December 3, 2013. http://
www.theguardian.com/culture-professionals-network/culture-professionals-blog/2013/dec/03/amit-sood-google-cultural-institute-art-project
18. Farivar, Cyrus. "Google to build R&D facility and 'European cultural center' in France.” Deutsche Welle. September 9,
2010. http://www.dw.com/en/google-to-build-rd-facility-and-european-cultural-center-in-france/a-5993560
19. Google Art Project. Art Project V1 - Launch Event at Tate Britain. (on-line video February 1, 2011) https://
www.youtube.com/watch?v=NsynsSWVvnM
20. Google Cultural Institute. Accessed Dec 18, 2015. https://www.google.com/culturalinstitute/about/partners/
21. Robinson, James. “Eric Schmidt, chairman of Google, condemns British education system” The Guardian. August 26, 2011
http://www.theguardian.com/technology/2011/aug/26/eric-schmidt-chairman-google-education
22. European Commission. Letter addressed to Google by the Article 29 Group (Brussels 2012) http://ec.europa.eu/justice/
data-protection/article-29/documentation/other-document/files/2012/20121016_letter_to_google_en.pdf
23. Willsher, Kim. “Google Cultural Institute's Paris opening snubbed by French minister.” The Guardian. December 10, 2013
http://www.theguardian.com/world/2013/dec/10/google-cultural-institute-france-minister-snub
24. Bowers, Simon & Syal, Rajeev. “MP on Google tax avoidance scheme: 'I think that you do evil'”. The Guardian. May 16,
2013. http://www.theguardian.com/technology/2013/may/16/google-told-by-mp-you-do-do-evil
25. Court of Justice of the European Union. Press-release No 70/14 (Luxembourg, 2014) http://curia.europa.eu/jcms/upload/
docs/application/pdf/2014-05/cp140070en.pdf
26. Barbican. “Digital Revolution.” Accessed December 15, 2015 https://www.barbican.org.uk/bie/upcoming-digital-revolution
27. Google. “Dev Art”. Accessed December 15, 2015 https://devart.withgoogle.com/
28. Tekniska Museet. “Digital Revolution.” Accessed December 15, 2015 http://www.tekniskamuseet.se/1/5554.html
29. Google Cultural Institute. Accessed December 15, 2015. https://www.google.com/culturalinstitute/thelab/
30. Echikson,William. Partnering in Belgium to create a capital of culture (blog) March 20, 2014. Accessed Dec 22, 2015
http://googlepolicyeurope.blogspot.se/2014/03/partnering-in-belgium-to-create-capital.html
31. European Commission. Antitrust: Commission sends Statement of Objections to Google on comparison shopping service;
opens separate formal investigation on Android. (Brussels 2015) http://europa.eu/rapid/press-release_IP-15-4780_en.htm
32. Yun Chee, Foo. “Google rejects 'unfounded' EU antitrust charges of market abuse” Reuters. (August 27, 2015) http://
www.reuters.com/article/us-google-eu-antitrust-idUSKCN0QW20F20150827
33. European Commission. Antitrust: Commission sends Statement of Objections to Google on comparison shopping service; opens separate formal investigation on Android. (Brussels 2015) http://europa.eu/rapid/press-release_IP-15-4780_en.htm

P.194

P.195

34. Transparency International. Lobby meetings with EU policy-makers dominated by corporate interests (blog) June 24, 2015.
Accessed December 22, 2015. http://www.transparency.org/news/pressrelease/
lobby_meetings_with_eu_policy_makers_dominated_by_corporate_interests
35. Wikipedia, The Free Encyclopedia. s.v. “Alphabet Inc.” Accessed Jan 25, 2016. https://en.wikipedia.org/wiki/Alphabet_Inc
36. Google. Mundaneum co-founder Paul Otlet's 147th Birthday (blog) August 23, 2015. Accessed Dec 22, 2015 http://
www.google.com/doodles/mundaneum-co-founder-paul-otlets-147th-birthday
37. Google. Mundaneum co-founder Paul Otlet's 147th Birthday
38. The British Museum. The British Museum’s unparalleled world collection at your fingertips. (blog) November 12, 2015.
Accessed December 22, 2015. https://www.britishmuseum.org/about_us/news_and_press/press_releases/2015/
with_google.aspx
39. Sood, Amit. Step on stage with the Google Cultural Institute (blog) December 1, 2015. Accessed December 22, 2015.
https://googleblog.blogspot.se/2015/12/step-on-stage-with-google-cultural.html
40. Sundberg, Sam. “Världsarvet enligt Google”. Svenska Dagbladet. March 27, 2016. http://www.svd.se/varldsarvet-enligt-google
41. TED. Amit Sood: Every piece of art you've ever wanted to see up close and searchable. (on-line video February 2016)
https://www.ted.com/talks/amit_sood_every_piece_of_art_you_ve_ever_wanted_to_see_up_close_and_searchable/

Une
histoire
préventive
du
Google
Cultural
Institute
GERALDINE JUÁREZ

I. L'ORGANISATION DE L'INFORMATION N'EST JAMAIS
INNOCENTE

Il y a six ans, Google, une entreprise d'Alphabet, a lancé un nouveau projet : le Google Art
Project. L'histoire officielle, celle écrite par Google et distribuée principalement à travers des
communiqués de presse sur mesure et de brèves informations commerciales, nous dit que
tout a commencé « en 2010, avec un projet où Google intervenait à 20%, qui fut présenté
au public pour la première fois en 2011. Il s'agissait de 17 musées réunis dans une
plateforme en ligne très intéressante afin de permettre aux utilisateurs de découvrir l'art d'une
manière tout à fait nouvelle et différente. »[1] Tandis que Google Books faisait face à des
problèmes d'ordre légal et que la Commission européenne lançait son enquête antitrust
contre Google en 2010, le Google Art Project prenait, non pas par hasard, de l'ampleur.
Cela conduisit à la création du Google Cultural Institute, dont le siège se trouve à Paris et « dont
la mission est de rendre la culture mondiale accessible en ligne ».[2]
Le Google Cultural Institute est clairement divisé en sections : Art Project, Historical
Moments et World Wonders. Cela correspond dans les grandes lignes à beaux-arts, histoire
du monde et matériel culturel. Techniquement, le Google Cultural Institute peut être décrit
comme une base de données qui alimente un dépositaire d'images haute résolution
représentant des objets d'art, des objets, des documents, des éphémères ainsi que
d'informations à propos, et provenant, de leurs « partenaires » - les musées publics, les

P.196

P.197

galeries et les institutions culturelles qui offrent ce matériel culturel -des visites en 3D et des
cartes faites à partir de "street view". Pour le moment, le Google Cultural Institute compte
177 reproductions numériques d'une sélection de peintures dans une résolution de l'ordre
des giga pixels et 320 différents objets en 3D ainsi que de multiples diapositives thématiques
choisies en collaboration avec leurs partenaires ou par leurs utilisateurs.
Selon leur site, dans leur « Lab », ils développent une « nouvelle technologie afin d'aider
leurs partenaires à publier leurs collections en ligne et à toucher de nouveaux publics, comme
l'ont fait les initiatives du Google Art Project, Historic Moments et Words Wonders. » Ce
n'est pas un hasard que ces services soient proposés comme une oeuvre philanthropique aux
institutions publiques qui sont de plus en plus amenées à justifier leur existence face aux
réductions budgétaires et aux autres exigences en matière de gestion des politiques
d'austérité en Europe et ailleurs. « Il est peu probable et même impensable que [le Google
Cultural Institute] fasse disparaitre la famine chronique des institutions culturelles de service
public causée par la politique et présente même dans les pays riches »[3]. Il est important de
comprendre que Google est réellement en train de financer l'infrastructure technique et le
travail nécessaire à la transformation de la culture en données. De cette manière, Google
s'assure que la culture peut être facilement gérée et nourrir toutes sortes de produits
nécessaires à la ville néolibérale, afin de promouvoir et d'exploiter ces « biens » culturels, et
de soutenir la compétition avec d'autres centres urbains au niveau mondial, mais également
l'insatiable appétit d'informations de Google.
Le dirigeant du Google Cultural Institute est conscient qu'il existe un grand nombre
d'interrogations autour de leurs activités, cependant, Alphabet a choisi d'appeler les critiques
légitimes: des malentendus ; « Notre plus grand combat est ce malentendu permanent sur les
raisons de l'existence du Cultural Institute »[4] Le Google Cultural Institute, comme beaucoup
d'autres efforts culturels de Google, tels que Google Books et leur exposition artistique
Digital Revolution, a été le sujet de quelques critiques bien nécessaires, comme Powered by
Google: Widening Access and Tightening Corporate Control (Schiller & Yeo 2014); un
compte rendu détaillé des origines de cette intervention culturelle et de son rôle dans la
résurgence du capitalisme social: « là où les gens sont renvoyés aux corporations plutôt
qu'aux États pour des services qu'ils reçoivent ; là où le capital des entreprises a l'habitude
de se donner le droit de négocier le discours public ; et où l'histoire et l'art restent saturés par
les préférences et les priorités des classes de l'élite sociale. »[5]
Connu comme l'un, peut-être le seul essai d'analyse de l'utilisation des informations par
Google et de la rhétorique de démocratisation se trouvant en amont pour réorganiser les
institutions publiques culturelles en un « espace de profit », le texte de Schiller & Yeo est
fondamental pour la compréhension de l'évolution du Google Cultural Institute dans le
contexte historique du capitalisme numérique, où la dépendance mondiale aux technologies
de l'information est directement liée à la crise actuelle d'accumulation et, où la fièvre
d'archivage de Google « évince sa portée culturelle et idéologique à couper le souffle ».[6]

II. QUI COLONISE LES COLONS ?

Le Google Cultural Institute est un sujet de débat intéressant puisqu'il reflète les pulsions
colonialistes ancrées dans les désirs scientifiques et économiques qui ont formé ces mêmes
collections que le Google Cultural Institute négocie et accumule dans sa base de données.
Qui colonise les colons ? C'est une problématique très difficile que j'ai soulevée
précédemment dans un essai dédié au Google Cultural Institute, Alfred Russel Wallace et
les pulsions colonialistes derrière les fièvres d'archivage du 19e et du 21e siècles. Je n'ai pas
encore de réponse. Pourtant, une critique du Google Cultural Institute dans laquelle ses
motivations sont interprétées comme simplement colonialistes serait trompeuse et contre-productive. Leur but n'est pas d'asservir et d'exploiter la population tout entière et ses
ressources afin d'imposer une nouvelle idéologie et de civiliser les barbares dans la même
optique que celle des pays européens durant la colonisation. De plus, cela serait injuste et
irrespectueux vis-à-vis de tous ceux qui subissent encore les effets permanents de la
colonisation, exacerbés par l'expansion de la mondialisation économique.
Selon moi, l'assemblage de la technologie et de la science qui a produit le savoir à l'origine
de la création d'entités telles que Google et de ses dérivés, comme le Cultural Institute; ainsi
que la portée de son impact sur une société où la technologie de l'information est la forme de
technologie dominante, font de "technocolonialisme" un terme plus précis pour décrire les
interventions culturelles de Google. Même si la technocolonisation partage de nombreux
traits et éléments avec le projet colonial, comme l'exploitation des matériaux nécessaires à la
production d'informations et de technologies médiatiques - ainsi que les conflits qui en
découlent - les technologies de l'information sont tout de même différentes des navires et des
canons. Cependant, la fonction commerciale des technologies maritimes est identique aux
services libres - comme dans libre échange - déployés par les drones de Google ou Facebook
qui fournissent internet à l'Afrique, même si la mise en réseau des technologies de l'information est largement différente en matière d'infrastructure.
Il n'existe pas de définition officielle du technocolonialisme, mais il est important de le
comprendre comme une continuité des idées des Lumières qui a été à l'origine du désir de
rassembler, d'organiser et de gérer les informations au 19e siècle. Mon utilisation de ce
terme a pour objectif de souligner et de situer l'accumulation contemporaine, ainsi que la
gestion de l'information et des données au sein d'un paysage scientifique dirigé par l'idée « du
profit avant tout » comme une « extension logique de la valeur du surplus accumulée à
travers le colonialisme et l'esclavage ».[7]
Contrairement à l'époque coloniale, dans le technocolonialisme contemporain, la narration
n'est pas la suprématie d'une culture humaine spécifique. La culture technologique est le
sauveur. Peu importe que vous soyez musulman, français ou maya, l'objectif est d'obtenir les
meilleures technologies pour transformer la vie en données, les classifier, produire un contenu
à partir de celles-ci et créer des expériences pouvant être monétisées.

P.198

P.199

En toute logique, pour Google, une entreprise dont la mission est d'organiser les informations
du monde en vue de générer un profit, les institutions qui étaient auparavant chargées de
l'organisation de la connaissance du monde constituent des partenaires idéaux. Cependant,
comme indiqué plus tôt, l'engagement du Google Cultural Institute à rassembler les
informations des musées créés durant la période coloniale afin d'élever une certaine culture et
une manière supérieure de voir le monde est paradoxal. Aujourd'hui, nous sommes au
courant et nous sommes capables de défier les narrations dominantes autour du patrimoine
culturel, car ces institutions ont un véritable récit de l'histoire qui ne se limite pas à la
production de la section « à propos » d'un site internet, comme celui du Google Cultural
Institute. « Ce que les musées devraient peut-être faire, c'est amener les visiteurs à prendre
conscience que ce n'est pas la seule manière de voir les choses. Que le musée, à savoir
l'installation, la disposition et la collection, possède une histoire et qu'il dispose également
d'un bagage idéologique »[8]. Cependant, le Google Cultural Institute n'est pas un musée,
c'est une base de données disposant d'une interface qui permet de parcourir le contenu
culturel. Contrairement aux prestigieux musées avec lesquels il collabore, il manque d'une
histoire située dans un discours culturel spécifique. Il s'agit d'objets d'art, de merveilles du
monde et de moments historiques au sens large. La mission du Google Cultural Institute est
clairement commerciale et philanthropique, mais celui-ci manque d'un point de vue et d'une
position définie vis-à-vis du matériel culturel qu'il traite. Ce n'est pas surprenant puisque
Google a toujours évité de prendre position, tout est question de technodéterminisme et de la
noble mission d'organiser les informations du monde afin de le rendre meilleur. Cependant,
« la négociation et le rassemblement d'informations sont une forme dangereuse de
technocolonialisme ».[8]
En cherchant une narration culturelle dépassant l'idéologie californienne, le moteur de
recherche d'Alphabet a trouvé dans Paul Otlet et le Mundaneum la couverture parfaite pour
intégrer ses services philanthropiques dans l'histoire de la science de l'information, au-delà de
la Silicon Valley. Après tout, ils comprennent que « la possession des narrations historiques
et de leurs corrélats matériels devient un outil de manifestation et de réalisation des
revendications économiques ».[9]
Après avoir établi un centre de données dans la ville belge de Mons, ville du Mundaneum,
Google a offert son soutien à « l'aventure Mons 2015, en particulier en travaillant avec nos
partenaires de longue date, les archives du Mundaneum. Plus d'un siècle auparavant, deux
visionnaires belges ont imaginé l'architecture du World Wide Web d'hyperliens et

d'indexation de l'information, non pas sur des ordinateurs, mais sur des cartes de papier.
Leur création était appelée Mundaneum. »[10]

À l'occasion du 147e anniversaire de Paul Otlet, un Doodle sur la page d'Alphabet épelait
le nom de son entreprise en utilisant « les tiroirs du Mundaneum » pour former le mot G O
O G L E : « Aujourd'hui, Doodle rend hommage au travail pionnier de Paul sur le
Mundaneum. La collection de connaissances emmagasinées dans les tiroirs du Mundaneum
constituent un travail fondamental pour tout ce qui se fait chez Google. Dès les premiers
essais, vous pouvez voir ce concept prendre vie. »[11]
III. GOOGLE CULTURAL INSTITUTE

La dématérialisation des collections publiques à l'aide d'une infrastructure et de services
financés par des acteurs privés, tels que le GCI, doit être questionnée et analysée plus en
profondeur par des institutions hétérotopes pour comprendre les nouvelles formes prises par
une tension infinie entre connaissance/pouvoir au cœur d'un archivage contemporain, où
l'architecture de l'interface remplace et agit au nom du musée et où le visiteur est réduit aux
doigts d'un utilisateur capable de parcourir un nombre infini de biens culturels. À l'époque
où les institutions culturelles devraient être décolonisées plutôt que googlifiées, il est capital
d'aborder la question d'un projet tel que le Google Cultural Institute et son expansion
continue et inversement proportionnelle à l'échec des gouvernements et à la passivité des
institutions séduites par les gadgets[12].
Cependant, le dialogue est fragmenté entre les comptes rendus académiques, les
communiqués de presse, les interventions artistiques isolées, les conférences spécialisées et
les bulletins d'informations. Selon Femke Snelting, nous devons « trouver la patience de
construire une relation à ces histoires de manière cohérente ». Pour ce faire, nous devons
approfondir et assembler un meilleur compte rendu de l'histoire du Google Cultural Institute.
Construite à partir du texte phare de Schiller & Yeo, la ligne du temps suivante est ma
contribution à cette tâche et à une tentative d'assembler des morceaux en les situant dans un
contexte politique et économique plus large allant au-delà de l'histoire officielle racontée par

P.200

P.201

le Google Cultural Institute. Une inspection plus minutieuse des événements révèle que
l'escalade des interventions culturelles d'Alphabet se produit généralement après l'apparition
d'un défi juridique pour l'hégémonie économique en Europe.
2009
ERIC SCHMIDT VISITE L'IRAK

Un bulletin d'informations du Wall Street Journal[13] ainsi qu'un rapport de l'AP sur Youtube[14]
confirment le nouveau projet de Google dans le domaine de collections historiques. Le
président exécutif d'Alphabet déclare : « je ne peux pas imaginer une meilleure manière
d'utiliser notre temps et nos ressources qu'en rendant disponibles les images et les idées de
notre civilisation, depuis son origine, pour un milliard de personnes à travers le monde. »
Un compte rendu détaillé de la réflexion de cette visite, son contexte et son programme se
trouvent dans Powered by Google: Widening Access and Tightening Corporate Control.
(Schiller & Yeo 2014)
LA FRANCE RÉAGIT À L'ENCONTRE DE GOOGLE BOOKS

Concernant le conflit impliquant Google Books en Europe, Reuters a déclaré qu'en 2009,
l'ancien président français, Nicolas Sarkozy « avait promis des centaines de millions d'euros à
un programme de numérisation distinct, disant qu'il ne permettrait pas à la France “d'être

dépouillée de son patrimoine au profit d'une grande entreprise, peu importe si celle-ci était
sympathique, grande ou américaine.” »[15]
Cependant, même si le programme réactionnaire et nationaliste de Nicolas Sarkozy ne doit
pas être félicité, il est important de noter que la première attaque ouverte à l'encontre du
programme culturel de Google est venue du gouvernement français. Quatre ans plus tard, le
Google Cultural Institute établissait son siège à Paris.
2010
LA COMMISSION EUROPÉENNE LANCE UNE ENQUÊTE ANTITRUST À L'ENCONTRE DE
GOOGLE.

La Commission européenne a décidé d'ouvrir une enquête antitrust à partir des
allégations selon lesquelles Google Inc. aurait abusé de sa position dominante de
moteur de recherche, en violation avec le règlement de l'Union européenne (Article
102 TFUE). L'ouverture de procédures formelles fait suite aux plaintes déposées par
des fournisseurs de service de recherche relatives à un traitement défavorable de leurs
services dans les résultats de recherche gratuits et payants de Google, ainsi qu'au
placement préférentiel des propres services de Google. Le lancement des procédures ne
signifie pas que la Commission dispose d'une quelconque preuve d'infraction. Cela
signifie seulement que la Commission va mener une enquête poussée et prioritaire sur
[16]
l'affaire.
LE GOOGLE ART PROJECT A COMMENCÉ COMME PROJET 20 % SOUS LA DIRECTION
D'AMIT SOOD.

D'après The Guardian[17], ainsi que d'autres bulletins d'informations, le projet culturel de
Google a été lancé par des « googleurs » passionnés d'art.
GOOGLE ANNONCE SON PROJET DE CONSTRUCTION D'UN EUROPEAN CULTURAL
CENTER EN FRANCE.

Faisant référence à la France comme à l'un des plus importants centres pour la culture et la
technologie, le PDG de Google, Eric Schmidt, a annoncé officiellement la création d'un
centre « dédié à la technologie, particulièrement en faveur de la promotion des cultures
européennes passées, présentes et futures ».[18]
2011
LE GOOGLE ART PROJECT EST LANCÉ À LA TATE LONDON.

En février, le nouveau « produit » a été officiellement présenté. La présentation[19] souligne
que l'idée a commencé avec un projet 20 %, un projet qui n'émanait donc pas d'un mandat
d'entreprise.

P.202

P.203

D'après la section « Our Story »[20] du Google Cultural Institute, l'histoire du Google Art
Project commence avec l'intégration de 140 000 pièces du Yad Vashem World Holocaust
Centre, suivie de l'intégration des archives de Nelson Mandela dans la section "Historical
Moments" du Google Cultural Institute.
Plus tard au mois d'août, Eric Schmidt déclara que l'éducation devrait rassembler l'art et la
science comme lors des « jours glorieux de l'époque victorienne ».[21]
2012
LES AUTORITÉS DES DONNÉES DE L'UE LANCENT UNE NOUVELLE ENQUÊTE SUR
GOOGLE ET SES NOUVEAUX TERMES D'UTILISATION.

À la demande des autorités françaises, l'Union européenne lance une enquête à l'encontre
de Google concernant une violation des données privées causée par les nouveaux termes
d'utilisation publiés par Google le 1er mars 2012.[22]
LE GOOGLE CULTURAL INSTITUTE CONTINUE À NUMÉRISER LES « BIENS »
CULTURELS.

D'après le site du Google Cultural Institute, 151 partenaires ont rejoint le Google Art
Project, y compris le Musée d'Orsay en France. La section World of Wonders est lancée
avec des partenariats comme celui de l'UNESCO. Au mois d'octobre, la plateforme avait
changé d'image et était relancée avec plus de 400 partenaires.
2013
LE SIÈGE DU GOOGLE CULTURAL INSTITUTE OUVRE À PARIS.

Le 10 décembre, le nouveau siège français ouvre au numéro 8 rue de Londres. La ministre française de la Culture, Aurélie Filippetti, annule sa participation à l'événement, car elle « ne souhaite pas
apparaitre comme une garantie à une opération qui soulève encore un certain nombre de
questions ».[23]
LES AUTORITÉS FISCALES BRITANNIQUES LANCENT UNE ENQUÊTE SUR LE PLAN
FISCAL DE GOOGLE.

L'enquêteur du HM Customs and Revenue Committee estime que les opérations fiscales de
Google au Royaume-Uni réalisées via l'Irlande sont « fourbes, calculées et, selon moi,
contraires à l'éthique ».[24]

2014
CONCERNANT LE « DROIT À L'OUBLI », LA COUR DE JUSTICE DE L'UE STATUE
CONTRE GOOGLE.

La décision controversée tient les moteurs de recherche responsables des données
personnelles qu'ils gèrent. Conformément à la loi européenne, la Cour a statué « que
l'opérateur est, dans certaines circonstances, obligé de retirer des liens vers des sites internet
publiés par des tiers et contenant des informations liées à une personne et apparaissant
dans la liste des résultats suite à une recherche basée sur le nom de cette personne. La Cour
établit clairement qu'une telle obligation peut également exister dans un cas où le nom, ou
l'information, n'est pas effacé préalablement de ces pages internet, et même, comme cela peut
être le cas, lorsque leur publication elle-même est légale. »[25]
RÉVOLUTION NUMÉRIQUE AU BARBICAN, ROYAUME-UNI

Google sponsorise l'exposition Digital Revolution[26] et les œuvres commandées sous le nom « Dev-art: art made with code »[27]. Le Tekniska Museet à Stockholm a ensuite accueilli l'exposition.[28] « The Lab » du Google Cultural Institute ouvre : « Ici, les experts créatifs et la technologie se rassemblent pour partager des idées et construire de nouvelles manières de profiter de l'art et de la culture. »[29]
GOOGLE FAIT CONNAITRE SON INTENTION DE SOUTENIR LA VILLE DE MONS,
CAPITALE EUROPÉENNE DE LA CULTURE EN 2015.

Un communiqué de presse de Google[30] décrit le nouveau partenariat avec la ville belge de
Mons comme le résultat de leur position d'employeur local et d'investisseur dans la ville où
se situe l'un de leurs deux principaux centres de données en Europe.
2015
LA COMMISSION DE L'UE ENVOIE UNE COMMUNICATION DES GRIEFS À GOOGLE.

La Commission européenne a envoyé une communication des griefs à Google, déclarant
que :
« l'entreprise avait abusé de sa position dominante sur les marchés des services généraux de recherches internet dans l'espace économique européen en favorisant systématiquement son propre produit de comparateur d'achats dans les pages de résultats généraux de recherche. »[31]

Google rejette les accusations, les jugeant « erronées d'un point de vue factuel, légal et
économique ».[32]

P.204

P.205

LA COMMISSION EUROPÉENNE COMMENCE À ENQUÊTER SUR ANDROID.

La Commission déterminera si, en concluant des accords anti-compétitifs et/ou en abusant
d'une possible position dominante, Google a :
illégalement entravé le développement et l'accès au marché des systèmes d'exploitation mobiles, des applications mobiles de communication et des services de ses rivaux dans l'espace économique européen. Cette enquête est distincte et séparée du travail d'investigation sur le commerce de la recherche de Google.[33]
LE GOOGLE CULTURAL INSTITUTE POURSUIT SON EXPANSION.

D'après la section « Our Story » du Google Cultural Institute, le projet Street Art contient à
présent 10 000 pièces. Une nouvelle extension affiche les œuvres d'art du Google Art
Project dans le navigateur Chrome et « les amateurs d'art peuvent porter une œuvre au
poignet grâce à l'art Android ». Au mois d'août, le projet disposait de 850 partenaires
utilisant ses outils, de 4,7 millions de pièces dans sa collection et de plus de 1 500
expositions organisées.
TRANSPARENCY INTERNATIONAL RÉVÈLE QUE GOOGLE EST LE DEUXIÈME PLUS GRAND LOBBYISTE À BRUXELLES.[34]

ALPHABET INC. EST CRÉÉ LE 2 OCTOBRE.

« Alphabet Inc. (connu sous le nom d'Alphabet) est un conglomérat multinational américain
créé en 2015 pour être la société mère de Google et de plusieurs entreprises appartenant
auparavant à Google ou y étant liées. »[35]
LE DOODLE PAUL OTLET ET LES EXPOSITIONS MUNDANEUM-GOOGLE.

Google crée un doodle pour sa page d'accueil à l'occasion du 147e anniversaire de Paul
Otlet[36] et des projections de diapositives Towards the Information Age, Mapping
Knowledge et The 100th Anniversary of a Nobel Peace Prize, toutes organisées par le
Google Cultural Institute.
« Le Mundaneum et Google ont étroitement collaboré pour organiser neuf expositions en ligne exclusives pour le Google Cultural Institute. Cette année, l'équipe dans les coulisses de la réouverture du Mundaneum a travaillé avec les ingénieurs du Cultural Institute pour lancer une application mobile qui y est consacrée. »[37]
LE GOOGLE CULTURAL INSTITUTE S'ASSOCIE AU BRITISH MUSEUM.

Le British Museum annonce un « partenariat unique » à travers lequel plus de 4 500 pièces pourront être « visionnées en ligne en seulement quelques clics ». Dans le communiqué de presse officiel, le directeur du musée, Neil MacGregor, a déclaré : « Le monde a changé aujourd'hui, notre manière d'accéder à l'information a été révolutionnée par la technologie numérique. Cela permet de donner une nouvelle réalité à l'idéal des Lumières sur lequel le Museum a été fondé. Il est à présent possible, non seulement pour ceux qui visitent la collection en personne, mais pour tous ceux qui disposent d'un ordinateur ou d'un appareil mobile, d'y accéder, de l'explorer et d'en profiter. »[38]
LE GOOGLE CULTURAL INSTITUTE AJOUTE LA SECTION PERFORMING ARTS.

Plus de 60 organisations et interprètes des arts du spectacle (danse, théâtre, musique, opéra) rejoignent la collection du Google Cultural Institute.[39]
2016

...
Last
Revision:
28·06·2016

1. Caines, Matthew. « Arts head: Amit Sood, director, Google Cultural Institute ». The Guardian. 3 décembre 2013. http://www.theguardian.com/culture-professionals-network/culture-professionals-blog/2013/dec/03/amit-sood-google-cultural-institute-art-project
2. Google Paris. Consulté le 22 décembre 2015. http://www.google.se/about/careers/locations/paris/

P.206

P.207

3. Schiller, Dan & Yeo, Shinjoung. « Powered By Google: Widening Access And Tightening Corporate Control. » (In Aceti, D.
L. (Éd.). Red Art: New Utopias in Data Capitalism: Leonardo Electronic Almanac, Vol. 20, No. 1. Londres : Goldsmiths
University Press. 2014): 48
4. Dowd, Maureen. « The Google Art Heist ». The New York Times. 12 septembre 2015. http://
www.nytimes.com/2015/09/13/opinion/sunday/the-google-art-heist.html
5. Schiller, Dan & Shinjoung Yeo. « Powered By Google: Widening Access And Tightening Corporate Control. », 48
6. Schiller, Dan & Yeo, Shinjoung. « Powered By Google: Widening Access And Tightening Corporate Control. », 48
7. Davis, Heather & Turpin, Etienne, eds. Art in the Anthropocene (Londres : Open Humanities Press. 2015), 7
8. Bush, Randy. Psg.com On techno-colonialism. (blog) 13 juin 2015. Consulté le 22 décembre 2015 https://psg.com/ontechnocolonialism.html
9. Starzmann, Maria Theresia. « Cultural Imperialism and Heritage Politics in the Event of Armed Conflict: Prospects for an
‘Activist Archaeology’ ». Archeologies. Vol. 4 n° 3 (2008):376
10. Echikson,William. Partnering in Belgium to create a capital of culture (blog) 10 mars 2014. Consulté le 22 décembre 2015
http://googlepolicyeurope.blogspot.se/2014/03/partnering-in-belgium-to-create-capital.html
11. Google. Mundaneum co-founder Paul Otlet's 147th Birthday (blog) 23 août, 2015. Consulté le 22 décembre 2015 http://
www.google.com/doodles/mundaneum-co-founder-paul-otlets-147th-birthday
12. ex. https://www.google.com/culturalinstitute/thelab/#experiments
13. Lavallee, Andrew. « Google CEO: A New Iraq Means Business Opportunities. » Wall Street Journal. 24 novembre 2009. http://blogs.wsj.com/digits/2009/11/24/google-ceo-a-new-iraq-means-business-opportunities/
14. Associated Press. Google Documents Iraqi Museum Treasures (vidéo en ligne 24 novembre 2009) https://
www.youtube.com/watch?v=vqtgtdBvA9k
15. Jarry, Emmanuel. « France's Sarkozy takes on Google in books dispute. » Reuters. 8 décembre 2009. http://
www.reuters.com/article/us-france-google-sarkozy-idUSTRE5B73E320091208
16. European Commission. Antitrust: Commission probes allegations of antitrust violations by Google (Bruxelles 2010) http://
europa.eu/rapid/press-release_IP-10-1624_en.htm
17. Caines, Matthew. « Arts head: Amit Sood, director, Google Cultural Institute ». The Guardian. 3 décembre 2013. http://www.theguardian.com/culture-professionals-network/culture-professionals-blog/2013/dec/03/amit-sood-google-cultural-institute-art-project
18. Cyrus, Farivar. « Google to build R&D facility and 'European cultural center' in France. » Deutsche Welle. 9 septembre
2010. http://www.dw.com/en/google-to-build-rd-facility-and-european-cultural-center-in-france/a-5993560
19. Google Art Project. Art Project V1 - Launch Event at Tate Britain. (vidéo en ligne le 1er février 2011) https://
www.youtube.com/watch?v=NsynsSWVvnM
20. Google Cultural Institute. Consulté le 18 décembre 2015. https://www.google.com/culturalinstitute/about/partners/
21. Robinson, James. « Eric Schmidt, chairman of Google, condemns British education system » The Guardian. 26 août 2011
http://www.theguardian.com/technology/2011/aug/26/eric-schmidt-chairman-google-education
22. European Commission. Letter addressed to Google by the Article 29 Group (Bruxelles 2012) http://ec.europa.eu/justice/
data-protection/article-29/documentation/other-document/files/2012/20121016_letter_to_google_en.pdf
23. Willsher, Kim. « Google Cultural Institute's Paris opening snubbed by French minister. » The Guardian. 10 décembre, 2013
http://www.theguardian.com/world/2013/dec/10/google-cultural-institute-france-minister-snub
24. Bowers, Simon & Syal, Rajeev. « MP on Google tax avoidance scheme: 'I think that you do evil' ». The Guardian. 16 mai 2013. http://www.theguardian.com/technology/2013/may/16/google-told-by-mp-you-do-do-evil
25. Court of Justice of the European Union. Press-release No 70/14 (Luxembourg, 2014) http://curia.europa.eu/jcms/upload/
docs/application/pdf/2014-05/cp140070en.pdf
26. Barbican. « Digital Revolution. » Consulté le 15 décembre 2015 https://www.barbican.org.uk/bie/upcoming-digital-revolution
27. Google. « Dev Art ». Consulté le 15 décembre 2015 https://devart.withgoogle.com/
28. Tekniska Museet. « Digital Revolution. » Consulté le 15 décembre 2015 http://www.tekniskamuseet.se/1/5554.html
29. Google Cultural Institute. Consulté le 15 décembre 2015. https://www.google.com/culturalinstitute/thelab/
30. Echikson,William. Partnering in Belgium to create a capital of culture (blog) 10 mars 2014. Consulté le 22 décembre 2015
http://googlepolicyeurope.blogspot.se/2014/03/partnering-in-belgium-to-create-capital.html
31. European Commission. Antitrust: Commission sends Statement of Objections to Google on comparison shopping service;
opens separate formal investigation on Android. (Bruxelles 2015) http://europa.eu/rapid/press-release_IP-15-4780_en.htm
32. Yun Chee, Foo. « Google rejects 'unfounded' EU antitrust charges of market abuse » Reuters. (27 août 2015) http://
www.reuters.com/article/us-google-eu-antitrust-idUSKCN0QW20F20150827
33. European Commission. Antitrust: Commission sends Statement

34. Transparency International. Lobby meetings with EU policy-makers dominated by corporate interests (blog) 24 juin 2015.
Consulté le mardi 22 décembre 2015. http://www.transparency.org/news/pressrelease/
lobby_meetings_with_eu_policy_makers_dominated_by_corporate_interests
35. Wikipedia, The Free Encyclopedia. s.v. “Alphabet Inc.” (consulté le 25 janvier 2016) https://en.wikipedia.org/wiki/Alphabet_Inc.
36. Google. Mundaneum co-founder Paul Otlet's 147th Birthday (blog) 23 août, 2015. Consulté le 22 décembre 2015 http://
www.google.com/doodles/mundaneum-co-founder-paul-otlets-147th-birthday
37. Google. Mundaneum co-founder Paul Otlet's 147th Birthday
38. The British Museum. The British Museum’s unparalleled world collection at your fingertips. (blog) 12 novembre 2015.
Consulté le mardi 22 décembre 2015. https://www.britishmuseum.org/about_us/news_and_press/press_releases/2015/
with_google.aspx
39. Sood, Amit. Step on stage with the Google Cultural Institute (blog) 1er décembre 2015. Consulté le mardi 22 décembre
2015. https://googleblog.blogspot.se/2015/12/step-on-stage-with-google-cultural.html

P.208

P.209

Special:Disambiguation
The following is a list of all disambiguation pages on Mondotheque.
A page is treated as a disambiguation page if it contains the tag __DISAMBIG__ (or an
equivalent alias).
Showing below up to 15 results in range #1 to #15.
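The same rule can be scripted; the following is a minimal Python sketch, not the wiki's actual implementation, that flags a page as a disambiguation page when its wikitext contains the __DISAMBIG__ magic word. The page titles, page bodies and alias set used here are hypothetical examples.

# A minimal sketch, assuming pages are available as plain wikitext strings.
# A page counts as a disambiguation page when its text carries the
# __DISAMBIG__ magic word (or an equivalent alias, if the wiki defines one).

DISAMBIG_ALIASES = {"__DISAMBIG__"}  # extend with local aliases if any exist

def is_disambiguation(wikitext: str) -> bool:
    """Return True if the page text carries a disambiguation magic word."""
    return any(alias in wikitext for alias in DISAMBIG_ALIASES)

pages = {  # hypothetical page titles and bodies
    "Biblion": "__DISAMBIG__\nBiblion may refer to: ...",
    "Mundaneum": "__DISAMBIG__\nMundaneum may refer to: ...",
    "Monde (Univers)": "Monde means world in French ...",
}

print(sorted(title for title, text in pages.items() if is_disambiguation(text)))
# ['Biblion', 'Mundaneum']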
1. Biblion may refer to:
◦ Biblion (category), a subcategory of the category: Index Traité de
documentation
◦ Biblion (Traité de documentation), term used by Paul
Otlet to define all categories of books and documents in a section of Traité de
documentation
◦ Biblion (unity), the smallest document or intellectual unit
2. Cultural Institute may refer to:
◦ A Cultural Institute (organisation) , such as The
Mundaneum Archive Center in Mons
◦ Cultural Institute (project), a critical interrogation of
cultural institutions in neo-liberal times, developed by amongst others
Geraldine Juárez
◦ The Google Cultural Institute, a project offering
"Technologies that make the world’s culture accessible to anyone, anywhere."
3. L'EVANGELISTE may refer to:
◦ Vint Cerf, so-called 'internet evangelist', or 'father of the internet',
working at LA MÉGA-ENTREPRISE
◦ Jiddu Krishnamurti, priest at the 'Order of the Star', a theosophist
splinter group that Paul Otlet related to
◦ Sir Tim Berners Lee, 'open data evangelist', heading the World Wide
Web consortium (W3C)

4. L'UTOPISTE may refer to:
◦ Paul Otlet, documentalist, universalist, internationalist, indexalist. At
times considered as the 'father of information science', or 'visionary inventor of
the internet on paper'
◦ Le Corbusier, architect, universalist, internationalist. Worked with Paul
Otlet on plans for a City of knowledge
◦ Otto Neurath , philosopher of science, sociologist, political economist.
Hosted a branch of Mundaneum in The Hague
◦ Ted Nelson , technologist, philosopher, sociologist. Coined the terms
hypertext, hypermedia, transclusion, virtuality and intertwingularity
5. LA CAPITALE may refer to:
◦ Brussels, capital of Flanders and Europe
◦ Genève , world civic center
6. LA MANAGER may refer to:
◦ Delphine Jenart, assistant director at the Mundaneum Archive Center
in Mons.
◦ Bill Echikson, former public relations officer at Google, coordinating
communications for the European Union, and for all of Southern, Eastern
Europe, Middle East and Africa. Handled the company’s high profile
antitrust and other policy-related issues in Europe.
7. LA MÉGA-ENTREPRISE may refer to:
◦ Google inc, or Alphabet, sometimes referred to as "Crystal
Computing", "Project02", "Saturn" or "Green Box Computing"
◦ Carnegie Steel Company, supporter of the Mundaneum in Brussels
and the Peace Palace in The Hague
8. LA RÉGION may refer to:
◦ Wallonia (Belgium), or La Wallonie. Former mining area, homebase of former prime minister Elio di Rupo, location of two Google
datacenters and the Mundaneum Archive Center
◦ Groningen (The Netherlands), future location of a Google data
center in Eemshaven
◦ Hamina (Finland), location of a Google data center

P.210

P.211

9. LE BIOGRAPHE is used for persons that are instrumental in constructing the
narrative of Paul Otlet. It may refer to:
◦ André Canonne, librarian and director of the Centre de Lecture
publique de la Communauté française (CLPCF). Discovers the
Mundaneum in the 1960s. Publishes a facsimile edition of the Traité de
documentation (1989) and prepares the opening of Espace Mundaneum in
Brussels at Place Rogier (1990)
◦ Warden Boyd Rayward, librarian scientist, discovers the Mundaneum
in the 1970s. Writes the first biography of Paul Otlet in English: The
Universe of Information: the Work of Paul Otlet for Documentation and
international Organization (1975)
◦ Benoît Peeters and François Schuiten , comics-writers and
scenographers, discover the Mundaneum in the 1980s. The archivist in the
graphic novel Les Cités Obscures (1983) is modelled on Paul Otlet
◦ Françoise Levie, filmmaker, discovers the Mundaneum in the 1990s.
Author of the fictionalised biography The man who wanted to classify the
world (2002)
◦ Alex Wright, writer and journalist, discovers the Mundaneum in 2003.
Author of Cataloging the World: Paul Otlet and the Birth of the Information
Age (2014)
10. LE DIRECTEUR may refer to:
◦ Harm Post, director of Groningen Sea Ports, future location of a Google
data center
◦ Andrew Carnegie, director of the Carnegie Steel Company, sponsor of the
Mundaneum
◦ André Canonne, director of the Centre de Lecture publique de la
Communauté française (CLPCF) and guardian of the Mundaneum. See
also: LE BIOGRAPHE
◦ Jean-Paul Deplus, president of the current Mundaneum association,
but often referred to as LE DIRECTEUR
◦ Amit Sood, director (later 'founder') of the Google Cultural Institute and
Google Art Project
◦ Steve Crossan, director (sometimes 'founder' or 'head') of the Google
Cultural Institute
11. LE POLITICIEN may refer to:
◦ Elio di Rupo, former prime minister of Belgium and mayor of Mons

◦ Henri Lafontaine, Belgian lawyer and statesman, working with Paul
Otlet to realise the Mundaneum
◦ Nicolas Sarkozy, former president of France, negotiating deals with
LA MÉGA-ENTREPRISE
12. LE ROI may refer to:
◦ Leopold II, reigned as King of the Belgians from 1865 until 1909.
Exploited Congo as a private colonial venture. Patron of the Mundaneum
project
◦ Albert II, reigned as King of the Belgians from 1993 until his
abdication in 2013. Visited LA MÉGA-ENTREPRISE in 2008
13. Monde may refer to:
◦ Monde (Univers) means world in French and is used in many
drawings and schemes by Paul Otlet. See for example: World + Brain and
Mundaneum
◦ Monde (Publication), Essai d'universalisme. Last book published
by Paul Otlet (1935)
◦ Mondialisation , Term coined by Paul Otlet (1916)
14. Mundaneum may refer to:
◦ Mundaneum (Utopia) , a project designed by Paul Otlet and Henri
Lafontaine
◦ Mundaneum (Archive Centre) , a cultural institution in Mons,
housing the archives of Paul Otlet and Henri Lafontaine since 1993
15. Urbanisme may refer to:
◦ Urban planning, a technical and political process concerned with the
use of land, protection and use of the environment, public welfare, and the
design of the urban environment, including air, water, and the infrastructure
passing into and out of urban areas such as transportation, communications,
and distribution networks.
◦ Urbanisme (Publication), a book by Le Corbusier (1925).

P.212

P.213

Location,
location,
location

From
Paper
Mill to
Google
Data
Center
SHINJOUNG YEO

Every second of every day, billions of people around the world are googling, mapping, liking,
tweeting, reading, writing, watching, communicating, and working over the Internet.
According to Cisco, global Internet traffic will surpass one zettabyte – nearly a trillion
gigabytes! – in 2016, which equates to 667 trillion feature-length films.[1] Internet traffic is
expected to double by 2019[2] as the internet weaves itself ever more deeply into the very fabric of many people’s daily lives.
Internet search giant Google – since August, 2015 a subsidiary of Alphabet Inc.[3] – is one
of the major conduits of our social activities on the Web. It processes over 3.3 billion
searches each and every day, 105 billion searches per month or 1.3 trillion per year,[4] and is responsible for over 88% of Internet search activity around the globe.[5] Predicating its business on people’s everyday information activity – search – Google generated $74.54 billion[6] in 2015, equivalent to or more than the GDP of some countries. The vast majority of Google’s revenue – $67.39 billion[7] – came from advertising on its various platforms including Google search, YouTube, AdSense products, Chrome OS, Android etc.; the
company is rapidly expanding its business to other sectors like cloud services, health,
education, self-driving cars, internet of things, life sciences, and the like. Google’s lucrative
internet business does not only generate profits. As Google’s chief economist Hal Varian
states:
…it also generates torrents of data about users’ tastes and habits, data that Google
then sifts and processes in order to predict future consumer behavior, find ways to
improve its products, and sell more ads. This is the heart and soul of Googlenomics.
It’s a system of constant self-analysis: a data-fueled feedback loop that defines not only Google’s future but the future of anyone who does business online.[8]
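As a rough back-of-the-envelope check of the orders of magnitude quoted above (a sketch only: it assumes the decimal definition of a zettabyte, 10^21 bytes, and uses the rounded estimates from the cited reports as inputs):

# Order-of-magnitude check of the quoted traffic and search figures.
ZETTABYTE_IN_BYTES = 10**21
GIGABYTE_IN_BYTES = 10**9
print(ZETTABYTE_IN_BYTES // GIGABYTE_IN_BYTES)  # 1_000_000_000_000: one zettabyte is a trillion gigabytes

searches_per_day = 3.3e9          # "over 3.3 billion searches each day"
print(searches_per_day * 30)      # ~9.9e10, roughly the quoted 105 billion per month
print(searches_per_day * 365)     # ~1.2e12, on the order of the quoted 1.3 trillion per year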

P.214

P.215

Google’s business model is emblematic of the “new economy” which is primarily built around
data and information. The “new economy” – the term popularized in the 1990s during the
first dot-com boom – is often distinguished in mainstream discourse from the traditional industrial economy, which demands large-scale investment of physical capital and produces material goods; the “new economy” instead emphasizes the unique nature of information and purports to be less resource-intensive. Beginning in the 1960s, post-industrial theorists asserted the emergence of this “new” economy, claiming that the increase of highly skilled information workers and the widespread application of information technologies, along with the decrease of manual labor, would bring a new mode of production and fundamental changes in exploitative capitalist social relations.[9]
Has the “new” economy challenged capitalist social relations and transcended the material
world? Google and other Internet companies have been investing heavily in industrial-scale
real estate around the world and continue to build large-scale physical infrastructure in the
way of data centers where the world’s bits and bytes are stored, processed and delivered.
Terms like “tube”, “cloud” or “weightless” often give us the impression that our newly marketed social and cultural activities over the Internet transcend the physical realm and occur in the vapors of the Internet; far from this perception, however, every bit of information in the “new economy” is transmitted through and located in physical space, on very real and very large infrastructure encompassing existing power structures, from phone lines and fiber optics to data centers to transnational undersea telecommunication cables.
There is much boosterism and celebration that the “new economy” holds the keys to
individual freedom, liberty and democratic participation and will free labor from exploitation;
however, the material/physical base that supports the economy and our everyday lives tells a
very different story. My analysis presents an integral piece of the physical infrastructure
behind the “new economy” and the space embedded in that infrastructure in order to
elucidate that the “new economy” does not occur in an abstract place but rather is manifested
in the concrete material world, one deeply embedded in capitalist development which
reproduces structural inequality on a global scale. Specifically, the analysis will focus on
Google’s growing large-scale data center infrastructure that is restructuring and reconfiguring
previously declining industrial cities and towns as new production places within the US and
around the world.
Today, data centers are found in nearly every sector of the economy: financial services,
media, high-tech, education, retail, medical, government etc. The study of the development of
data centers in each of these sectors could be separate projects in and of themselves;
however, for this project, I will only look at Google as a window into the “new” economy, the

company which has led the way in the internet sector in building out and linking up data
centers as it expands its territory of profit.[10]
DATA CENTERS IN CONTEXT

The concepts of “spatial fix” by critical geographer David Harvey[11] and “digital capitalism”
by historian of communication and information Dan Schiller[12] are useful to contextualize and
place the emergence of large-scale data centers within capitalist development. Harvey
uses the notion of the spatial fix to explicate and situate the geographical dynamics and crisis tendencies of capitalism, marked by over-accumulation and under-consumption. Harvey’s spatial fix
has dual meanings. One meaning is that it is necessary for capital to have a fixed space –
physical infrastructure (transportation, communications, highways, power etc.) as well as a
built environment – in order to facilitate capital’s geographical expansion. The other meaning
is a fix or solution for capitalists’ crisis through geographical expansion and reorganization of
space as capital searches for new markets and temporarily relocates to more profitable space
– new accumulation sites and territories. This temporary spatial fix leads capital to leave behind existing physical infrastructure and built environments as it shifts to new temporarily fixed spaces in order to cultivate new markets.
Building on Harvey’s work, Schiller introduced the concept of digital capitalism in response
to the 1970’s crisis of capitalism in which information became that “spatial-temporal fix” or
“pole of growth.”[13] To pull capitalism out of the worst economic downturn of the 1970s, massive amounts of information and communication technologies were introduced across the length and breadth of economic sectors as capitalism shifted to a more information-intensive economy – digital capitalism. Today digital capitalism grips every sector, as it has
expanded and extended beyond information industries and reorganized the entire economy
from manufacturing production to finance to science to education to arts and health and
impacts every iota of people’s social lives.[14] Current growth of large-scale data centers by
Internet companies and their reoccupation of industrial towns needs to be situated within the
context of the development of digital capitalism.
FROM MANUFACTURING FACTORY TO DATA FACTORY

Large-scale data centers – sometimes called “server farms” in an oddly quaint allusion to the
pre-industrial agrarian society – are centralized facilities that primarily contain large numbers
of servers and computer equipment used for data processing, data storage, and high-speed
telecommunications. In a sense, data centers are similar to the capitalist factory system; but
instead of a linear process of input of raw materials to
output of material goods for mass consumption, they input
mass data in order to facilitate and expand the endless
cycle of commodification – an Ouroboros-like machine.
As the factory system enables the production of more

P.216


P.217

goods at a lower cost through automation and control of labor to maximize profit, data centers
have been developed to process large quantities of bits and bytes as fast as possible and at as
low a cost as possible through automation and centralization. The data center is a hyper-automated digital factory system that enables the operation of hundreds of thousands of
servers through centralization in order to conduct business around the clock and around the
globe. Compared to traditional industrial factories that produce material goods and generally
employ entire towns if not cities, large-scale data centers each generally employ fewer than
100 full-time employees – most of these employees are either engineers or security guards.
In a way, data centers are the ultimate automated factory. Moreover, the owner of a
traditional factory needs to acquire/purchase/extract raw materials to produce commodities;
however, much of the raw data for a data center are freely drawn from the labor and
everyday activities of Internet users without a direct cost to the data center. The factory
system is to industrial capitalism what data centers are becoming to digital capitalism.
THE GROWTH OF GOOGLE’S DATA FACTORIES

Today, there is a growing arms race among leading Internet companies – Google, Microsoft,
Amazon, Facebook, IBM – in building out large-scale data centers around the globe.[16]
Among these companies, Google has so far been leading in terms of scale and capital
investment. In 2014, the company spent $11 billion for real estate purchases, production
equipment, and data center construction,[17] compared to Amazon which spent $4.9 billion
and Facebook with $1.8 billion in the same year.[18]
Until 2002, Google rented only one colocation facility in Santa Clara, California to house about 300 servers.[19] However, by 2003 the company had started to purchase entire colocation buildings that were cheaply available due to overexpansion during the dot-com
era. Google soon began to design and build its own data centers containing thousands of
custom-built servers as Google expanded its services and global market and responded to
competitive pressures. Initially, Google was highly secretive about its data center locations
and related technologies; a former Google employee called this Google’s “Manhattan project.” However, in 2012, Google began to open up its data centers. While this seems as if Google had a change of heart and wanted to be more transparent about its data centers to the public, it is in reality more about a self-serving public relations onslaught to show how its cloud infrastructure is superior to its competitors’ and to secure future cloud clients.[20]
As of 2016, Google has data centers in 14 locations around the globe – eight in the Americas, two in Asia and four in Europe – with an unknown number of colocated centers – ones in which space, servers, and infrastructure are shared with other companies – in undisclosed locations. The sheer size of Google’s data centers is reflected in its server chip consumption. In all, Google supposedly accounts for 5% of all server chips sold in the world,[21] and it is even affecting the price of chips as the company is one of the biggest chip buyers. Google’s recent alliance with Qualcomm for its new chip has become a threat to Intel – Google has

been the largest customer of the world’s largest chip maker for quite some time.[22] According
to Steven Levy, Google admitted that, “it is the largest computing manufacturer in the world
– making its own servers requires it to build more units every year than the industry giants
HP, Dell, and Lenovo.”[23] Moreover, Google has been amassing cheap “dark fibre” – fibre optic cables that were laid down during the 1990s dot-com boom by now-defunct telecom firms betting on increased internet traffic[24] – constructing its own fibre optic networks in US cities,[25] and investing in building massive undersea cables to maintain its dominance and expand its markets by controlling Internet infrastructure.[26]
With its own customized servers and software, Google is building a massive data center
network infrastructure, delivering its service at unprecedented speeds around the clock and
around the world. According to one report, Google’s global network of data centers, with a
capacity to deliver 1-petabit-per-second bandwidth, is powerful enough to read all of the
scanned books in the Library of Congress in a fraction of a second.[27] New York Times
columnist Pascal Zachary once reported:
…I believe that the physical network is Google’s “secret sauce,” its premier competitive
advantage. While a brilliant lone wolf can conceive of a dazzling algorithm, only a
super wealthy and well-managed organization can run what is arguably the most
valuable computer network on the planet. Without the computer network, Google is nothing.[28]

Where then is Google’s secret sauce physically located? Despite its massiveness, Google’s
data center infrastructure and locations have been invisible to millions of everyday Google
users around the globe – users assume that Google is ubiquitous, the largest cloud in the
‘net.’ However, this infrastructure no longer goes unnoticed, since the infrastructure needed to support the “new economy” is beginning to occupy and transform our landscapes, building a new fixed network of global digital production space.
NEW NETWORK OF DIGITAL PRODUCTION SPACE:
RESTRUCTURING INDUSTRIAL CITIES

While Google’s data traffic and exchange extends well beyond geographic boundaries, its
physical plants are fixed in places where digital goods and services are processed and
produced. For the production of material goods, access to cheap labor has long been one of
the primary criteria for companies to select their places of production; but for data centers, a
large quantity of cheap labor is not as important since they require only a small number of
employees. The common characteristics necessary for data center sites have so far been: good fiber-optic infrastructure; cheap and reliable power sources for cooling and running servers; geographical diversity for redundancy and speed; cheap land; and locations close to target markets.[29] Today, if one finds geographical areas in the world with some combination
of these factors, there will likely be data centers there or in the planning stages for the near
future.

P.218

P.219

Given these criteria, there has been an emerging trend of reconfiguring and converting former industrial sites – paper mills, printing plants, steel plants, textile mills, auto plants, aluminum plants and coal plants – into data centers. In the United States, the rust belt regions of the upper Northeast, Great Lakes and Midwest in particular – previously hubs of manufacturing industries and heartlands of both industrial capitalism and labor movements – are turning (or attempting to turn) into hotspots for large-scale data centers for Internet companies.[30] These cities are the remains of past crises of industrial capitalism as well as of long labor struggles.
The reason that former industrial sites in the US and other parts of the world are attractive for data center conversion is that, starting in the 1970s, many factories closed or moved their operations overseas in search of ever-cheaper labor and concomitantly weak or nonexistent labor laws, leaving behind solid physical plants and industrial infrastructures of power, water and cooling systems once used to drive industrial machines and production lines and now perfectly fit for data center development.[31] Finding cheap energy is especially crucial for companies like Google since data center energy costs are a major expenditure.
Moreover, many communities surrounding former industrial sites have struggled and become
distressed with increasing poverty, high unemployment and little labor power. Thus, under
the guise of “economic development,” many state and local governments have been eager to
lure data centers by offering lavish subsidies for IT companies. For at least the last five years,
state after state has legislated tax breaks for data centers and about a dozen states have
created customized incentive programs for data center operations.[32] State incentives range from full or partial exemptions of sales/use taxes on equipment, construction materials, and in some cases purchases of electricity and backup fuel.[33] This kind of corporate-centric economic development is far from the construction of democratic cities that prioritize social needs and collective interests and reflect the environmental and long-term sustainability of communities; rather, the goal is to “create a good business climate and therefore to optimize conditions for capital accumulation no matter what the consequences for employment or social and environmental well-being.”[34]
Google’s first large-scale data center site is located in one of these struggling former industrial
towns. In 2006, Google opened its first data center in The Dalles – now nicknamed
Googleville – a town of a little over 15,000 located alongside the Columbia River and
about 80 miles east of Portland, Oregon. It is an ideal site in the sense that it is close to a major metropolitan corridor (Seattle-Tacoma-Portland) that serves business interests and large urban population centers, yet it offers cheap land, little organized labor, the promise of cheap electrical power from the Bonneville Power Administration, a federal governmental agency, as well as a 15-year property tax exemption. In addition, The Dalles had already built a fiber-optic loop as part of its economic development efforts, hoping to attract the IT industry.[35]
Not long ago, the residents of The Dalles and communities up and down the Columbia
River gorge relied on the aluminum industry, an industry which required massive amounts of

– in this case hydroelectric – power. Energy makes up 40 percent of the cost of aluminum production,[36] and the industry was boosted by the war economies of World War II and the Korean War, as aluminum was used for various war products, especially aircraft. However, starting in 1980, aluminum smelter plants began to close and move out of the area, laying off their workers and leaving their installed infrastructure behind.
Since then, The Dalles, like other industrial towns, has suffered from high unemployment, poverty, an aging population, budget-strapped schools, etc. Thus, Google’s decision to build a data center the size of two football fields (68,680-square-foot storage buildings) in order to take advantage of the preinstalled fiber optic infrastructure, relatively cheap hydropower from The Dalles Dam, and tax benefits was presented as the new hope for the distressed town and a large employment opportunity for the town’s population.[37]
There was much community excitement that Google’s arrival would mean an economic revival for the struggling city and a better life for the poor, but no one could discuss it at the time of negotiations with Google because local officials involved in the negotiations had all signed nondisclosure agreements (NDAs);[38] they were required not to mention Google in any way but were instead instructed to refer to the project as “Project 02.”[39] Google insisted that the information it shared with representatives of The Dalles not be subject to public records disclosures.[40] While public subsidies were a necessary precondition of building the data center,[41] there was no transparency and no open public debate on alternative visions of development that reflect collective community interests.
Google’s highly anticipated data center in The Dalles opened in 2006, but it “opened” only
in the sense that it became operational. To this day, Google’s data center site is off-limits to
the community and is well-guarded, including multiple CCTV cameras which survey the
grounds around the clock. Google might boast of its corporate culture as “open” and “non-hierarchical”, but this does not extend to the data centers within the community where Google
benefits as it extracts resources. Not only was the building process secretive, but access to the
data center itself is highly restricted. Data centers are well secured with several guards, gates
and checkpoints. Google’s data center has reshaped the landscape into a pseudo-militarized
zone as it is not far off from a top-secret military compound – access denied.
This kind of landscape is reproduced in other parts of the US as well. New data center hubs
have begun to emerge in other rural communities; one of them is in southwestern North
Carolina where the leading tech giants – Google, Facebook, Apple, Disney and American
Express – have built data centers in close proximity to each other. The cluster of data
centers is referred to as the “NC Data Center Corridor,”[42] a neologism used to market the
area.
At one time, the southwestern part of North Carolina had a heavy concentration of highly labor-intensive textile and furniture industries that exploited the region’s cheap labor supply
and where workers fought long and hard for better working conditions and wages. However,
over the last 25 years, factories have closed and slowly moved out of the area and been

P.220

P.221

relocated to Asia and Latin America.[43] As a result – and mirroring the situation in The
Dalles – the area has suffered a series of layoffs, chronically high unemployment rates and
poverty, but now is being rebranded as a center of the “new economy” geared toward
attracting high-tech industries. For many towns, abandoned manufacturing plants are no
longer an eyesore but rather are becoming major selling points to the IT industry. Rich
Miller, editor of Data Center Knowledge, stated, “one of the things that’s driving the
competitiveness of our area is the power capacity built for manufacturers in the past 50
years.”[44]
In 2008, Google opened a $600 million data center in Lenoir, NC, a town in Caldwell
County (population 18,228[45]). Lenoir was once known as the furniture capital of the South
but lost 1,120 jobs in 2006.[46] More than 300,000 furniture jobs moved away from the United States during the 2000s as factories relocated to China for cheaper labor and operational
costs.[47] In order to lure Google, Caldwell County and the City of Lenoir gave Google a
100 percent waiver on business property taxes, an 80 percent waiver on real estate property
taxes over the next 30 years,[48] and various other incentives. Former NC Governor Mike
Easley announced that “this company will provide hundreds of good-paying, knowledge-based jobs that North Carolina’s citizens want”;[49] yet he addressed neither the cost of attracting Google for taxpayers – including those laid-off factory workers – nor the
environmental impact of the data center. In 2013, Google expanded its operation in Lenoir
with an additional $600 million investment, and as of 2015, it has 250 employees in its
220-plus acre data center site.[50]
The company continues its crusade of giving “hope” to distressed communities and now
“saving” the environment from the old coal-fueled industrial economy. Google’s latest project
in the US is in Widows Creek, Alabama, where the company is converting a coal-burning power plant commissioned in 1952 – which has been polluting the area for years – into its 14th data center, powered by renewable energy. Shifting from coal to renewable energy seems to demonstrate how Google has gone “green” and is a different kind of corporation that
cares for the environment. However, this is a highly calculated business decision given that
relying on renewable energy is more economical over the long term than coal – which is
more volatile as commodity prices greatly fluctuate.[51] Google is gobbling up renewable
energy deals around the world to procure cheap energy and power its data centers.[52]
However, Google’s “green” public relations also camouflage the environmental damage brought about by the data centers’ enormous power consumption, e-waste from hardware, rare earth mining and the environmental toll across the entire supply chain.[53]
The trend of reoccupation of industrial sites by data centers is not confined to the US.
Google’s Internet business operates across territories and more than 50% of its revenues
come from outside the US. As Google’s domestic search market share has stabilized at around 60%, the company has aggressively moved to build data centers around the
world for its global expansion. One of Google’s most ambitious data center projects outside
the US was in Hamina, Finland where Google converted a paper mill to a data center.

In 2008, Stora Enso, the Finnish paper maker, in which the Finnish Government held 16%
of the company’s shares and controlled 34% of the company, shut down its Summa paper
mill on the site close to the city of Hamina in Southeastern Finland despite workers’
resistance against the closure.[54] The company shed 985 jobs including 485 from the
Summa plant.[55] Shortly after closing the plant, Stora Enso sold the 53-year-old paper mill site to Google for roughly $52 million, a deal which included 410 acres of land as well as the paper mill itself and its infrastructure.
Whitewashing the workers’ struggles, the Helsinki Times reported that, “everyone was
excited about Google coming to Finland. The news that the Internet giant had bought the old
Stora Enso mill in Hamina for a data centre was great news for a community stunned by job
losses and a slowing economy.”[56] However, the local elites recognized that jobs created by
Google would not drastically affect the city’s unemployment rate or alleviate the economic
plight of many people in the community, so they justified their decision by arguing that
connecting Google’s logo to the city’s image would result in increased investments in the
area.[57] The facility had roughly 125 full-time employees when Google announced its
Hamina operation’s expansion in 2013.[58] The data center is monitored by Google’s
customary CCTV cameras and motion detectors; even Google staff only have access to the
server halls after passing biometric authentication using iris recognition scanners.[59]
As with Google’s other data centers, the decision to build a data center in Hamina was not merely a matter of favorable existing infrastructure or natural resources. The location of
Hamina as its first Nordic data center is vital and strategic in terms of extending Google’s
reach into geographically dispersed markets, speed and management of data traffic. Hamina
is located close to the border with Russia and the area has long been known for good
Internet connectivity via Scandinavian telecommunications giant TeliaSonera, whose services
and international connections run right through the area of Hamina and reach into Russia as
well as to Sweden and Western Europe.[60] Eastern Europe has a growing Internet market
and Russia is one of the few countries where Google does not dominate the search market.
Yandex, Russia’s native language search engine, controls the Russian search market with
over 60% share.[61] By locating its infrastructure in Hamina, Google is establishing its
strategic global digital production beach-head for both the Nordic and Russian markets.
As Google is trying to maintain its global dominance and expand its business, the company
has continued to build out its data center operations on European soil. Besides Finland,
Google has built data centers in Dublin, Ireland, and St. Ghislain and Mons in Belgium,
which have each expanded their operations since their initial construction. However, the story of each of these data centers is similar: the aluminum-smelting town of The Dalles, Oregon and the furniture town of Lenoir, North Carolina in the US, the paper mill town of Hamina, Finland, the coal-mining towns of St. Ghislain and Mons, Belgium, and a warehouse converted into a data center in Dublin, Ireland. Each was once an industrial production site and/or a site for the extraction of environmental resources, now turned into a data center creating temporary production space to accelerate digital capitalism. Google’s latest venture in Europe is in the seaport town of

P.222

P.223

Eemshaven, Netherlands, which hosts several power stations as well as the transatlantic fiber-optic cable that links the US and Europe.
To many struggling communities around the world, the building of Google’s large-scale data
centers has been presented by the company and by political elites as an opportunity to
participate in the “new economy” – as well as a veiled threat of being left behind from the
“new economy” – as if this would magically lead to the creation of prosperity and equality. In
reality, these cities and towns are being reorganized and reoccupied for corporate interests, re-integrated into sites of capital accumulation and re-emerging as new networks of production for capitalist development.
CONCLUSION

Is the current physical landscape that supports the “new economy” outside of capitalist social relations? Does the process of redeveloping struggling former industrial cities by building Google data centers under the slogan of participation in the “new economy” really meet social needs and express democratic values? The “new economy” is boasted about as if it were radically different from past industrial capitalist development, a solution to myriad social problems that holds the potential for growth outside of the capitalist realm; however, the “new
economy” operates deeply within the logic of capitalist development – constant technological
innovation, relocation and reconstruction of new physical production places to link
geographically dispersed markets, reduction of labor costs, removal of obstacles that hinder its
growth and continuous expansion. Google’s purely market-driven data centers illustrate that
the “new economy” built on data and information does not bypass physical infrastructures and
physical places for the production and distribution of digital commodities. Rather, it is firmly
anchored in the physical world and simply establishes new infrastructures on top of existing industrial ones, and a new network of production places to meet the needs of digital commodity production at the expense of environmental, labor and social well-being.
We celebrate the democratic possibilities of the “networked information economy” as providing an alternative space free from capitalist practices; however, it is vital to recognize that this
“new economy” in which we put our hopes is supported by, built on, and firmly planted in
our material world. The question that we need to ask ourselves is: given that our communities
and physical infrastructures continue to be configured to assist the reproduction of the social
relations of capitalism, how far can our “new economy” deliver on the democracy and social
justice for which we all strive?
Last
Revision:
3·07·2016

1. James Titcomb, “World’s internet traffic to surpass one zettabyte in 2016,” Telegraph, February 4, 2016, http://
www.telegraph.co.uk/technology/2016/02/04/worlds-internet-traffic-to-surpass-one-zettabyte-in-2016/
2. Ibid.

3. Cade Metz, “A new company called Alphabet now owns Google,” Wired, August 10, 2015. http://wired.com/2015/08/
new-company-called-alphabet-owns-google/.
4. Google hasn’t released new data since 2012, but the figures are extrapolated from Google’s annual growth rate. See Danny
Sullivan, “Google Still Doing At Least 1 Trillion Searches Per Year,” Search Engine Land, January 16, 2015, http://
searchengineland.com/google-1-trillion-searches-per-year-212940
5. This is Google’s desktop search engine market as of January 2016. See “Worldwide desktop market share of leading search
engines from January 2010 to January 2016,” Statista, http://www.statista.com/statistics/216573/worldwide-market-shareof-search-engines/.
6. “Annual revenue of Alphabet from 2011 to 2015 (in billions of US dollars),” Statista, http://www.statista.com/
statistics/507742/alphabet-annual-global-revenue/.
7. “Advertising revenue of Google from 2001 to 2015 (in billion U.S. dollars),” Statista, http://www.statista.com/
statistics/266249/advertising-revenue-of-google/.
8. Steven Levy, “Secret of Googlenomics: Data-Fueled Recipe Brews Profitability,” Wired, May 22, 2009, http://
www.wired.com/culture/culturereviews/magazine/17-06/nep_googlenomics?currentPage=all.
9. Daniel Bell, The Coming of Post-Industrial Society: A Venture In Social Forecasting (New York: Basic Books, 1974); Alvin
Toffler, The Third Wave (New York: Morrow, 1980).
10. The term “territory of profit” is borrowed from Gary Fields’ book titled Territories of Profit: Communications, Capitalist
Development, and the Innovative Enterprises of G. F. Swift and Dell Computer (Stanford University Press, 2003)
11. David Harvey, Spaces of capital: towards a critical geography (New York: Routledge, 2001)
12. Dan Schiller, Digital Capitalism: Networking the Global Market System (Cambridge, Mass: MIT Press, 1999).
13. Dan Schiller, “Power Under Pressure: Digital Capitalism In Crisis,” International Journal of Communication 5 (2011): 924–
941
14. Dan Schiller, “Digital capitalism: stagnation and contention?” Open Democracy, October 13, 2015, https://
www.opendemocracy.net/digitaliberties/dan-schiller/digital-capitalism-stagnation-and-contention.
15. Ibid: 113-117.
16. Jason Hiner, “Why Microsoft, Google, and Amazon are racing to run your data center.” ZDNet, June 4, 2009, http://
www.zdnet.com/blog/btl/why-microsoft-google-and-amazon-are-racing-to-run-your-data-center/19733.
17. Derrick Harris, “Google had its biggest quarter ever for data center spending. Again,” Gigaom, February 4, 2015, https://
gigaom.com/2015/02/04/google-had-its-biggest-quarter-ever-for-data-center-spending-again/.
18. Ibid.
19. Steven Levy, In the Plex: How Google Thinks, Works, and Shapes Our Lives (New York: Simon & Schuster, 2011), 182.
20. Steven Levy, “Google Throws Open Doors to Its Top-Secret Data Center,” Wired, October 17 2012, http://
www.wired.com/2012/10/ff-inside-google-data-center/.
21. Cade Metz, “Google’s Hardware Endgame? Making Its Very Own Chips,” Wired, February 12, 2016, http://
www.wired.com/2016/02/googles-hardware-endgame-making-its-very-own-chips/.
22. Ian King and Jack Clark, “Qualcomm's Fledgling Server-Chip Efforts,” Bloomberg Business, February 3, 2016, http://
www.bloomberg.com/news/articles/2016-02-03/google-said-to-endorse-qualcomm-s-fledgling-server-chip-efforts-ik6ud7qg.
23. Levy, In the Plex, 181.
24. In 2013, the Wall Street Journal reported that Google controls more than 100,000 miles of routes around the world, which was considered bigger than the network of US-based telecom company Sprint. See Drew FitzGerald and Spencer E. Ante, “Tech Firms Push to Control Web's Pipes,” Wall Street Journal, December 13, 2013, http://www.wsj.com/articles/
SB10001424052702304173704579262361885883936
25. Google is offering its gigabit-speed fiber optic Internet service in 10 US cities. Since Internet service is a precondition of
Google’s myriad Internet businesses, Google’s strategy is to control the pipes rather than relying on telecom firms. See Mike
Wehner, “Google Fiber is succeeding and cable companies are starting to feel the pressure,” Business Insider, April 15, 2015,
http://www.businessinsider.com/google-fiber-is-succeeding-and-cable-companies-are-starting-to-feel-the-pressure-2015-4;
Ethan Baron, “Google Fiber coming to San Francisco first,” San Jose Mercury News, February 26, 2016, http://
www.mercurynews.com/business/ci_29556617/sorry-san-jose-google-fiber-coming-san-francisco.
26. Tim Hornyak, “9 things you didn't know about Google's undersea cable,” Computerworld, July 14, 2015, http://
www.computerworld.com/article/2947841/network-hardware-solutions/9-things-you-didnt-know-about-googles-underseacable.html
27. Jaikumar Vijayan, “Google Gives Glimpse Inside Its Massive Data Center Network,” eWeek, June 18, 2015, http://
www.eweek.com/servers/google-gives-glimpse-inside-its-massive-data-center-network.html
28. Pascal Zachary, “Unsung Heroes Who Move Products Forward,” New York Times, September 30, 2007, http://
www.nytimes.com/2007/09/30/technology/30ping.html

P.224

P.225

29. Tomas Freeman, Jones Lang, and Jason Warner, “What’s Important in the Data Center Location Decision,” Spring 2011,
http://www.areadevelopment.com/siteSelection/may2011/data-center-location-decision-factors2011-62626727.shtml
30. “From rust belt to data center green?” Green Data Center News, February 10, 2011, http://www.greendatacenternews.org/
articles/204867/from-rust-belt-to-data-center-green-by-doug-mohney/
31. Rich Miller, “North Carolina Emerges as Data Center Hub,” Data Center Knowledge, November 7, 2010, http://
www.datacenterknowledge.com/archives/2010/11/17/north-carolina-emerges-as-data-center-hub/.
32. David Chernicoff, “US tax breaks, state by state,” Datacenter Dynamics, January 6, 2016, http://
www.datacenterdynamics.com/design-build/us-tax-breaks-state-by-state/95428.fullarticle; “Case Study: Server Farms,” Good
Jobs First, http://www.goodjobsfirst.org/corporate-subsidy-watch/server-farms.
33. John Leino, “The role of incentives in Data Center Location Decisions,” Critical Environment Practice, February 28, 2011,
http://www.cbrephoenix.com/wp_eig/?p=68.
34. David Harvey, Spaces of global capitalism (London: Verso, 2006), 25.
35. Marsha Spellman, “Broadband, and Google, Come to Rural Oregon,” Broadband Properties, December 2005, http://
www.broadbandproperties.com/2005issues/dec05issues/spellman.pdf.
36. Ross Courtney, “The Goldendale aluminum plant -- The death of a way of life,” Yakima Herald-Republic, April 9, 2011,
http://www.yakima-herald.com/stories/2011/4/9/the-goldendale-aluminum-plant-the-death-of-a-way-of-life.
37. Ginger Strand, “Google’s addiction to cheap electricity,” Harper’s Magazine, March 2008, https://web.archive.org/web/20080410194348/http://harpers.org/media/
slideshow/annot/2008-03/index.html.

38. Linda Rosencrance, “Top-secret Google data center almost completed,” Computerworld, June 16, 2006, http://
www.computerworld.com/article/2546445/data-center/top-secret-google-data-center-almost-completed.html.
39. Bryon Beck, “Welcome to Googleville America’s newest information superhighway begins On Oregon’s Silicon Prairie,”
Willamette Week, June 4, 2008, http://wweek.com/portland/article-9089-welcome_to_googleville.html.
40. Rich Miller, “Google & Facebook: A Tale of Two Data Centers,” Data Center Knowledge, August 2, 2010, http://
www.datacenterknowledge.com/archives/2010/08/10/google-facebook-a-tale-of-two-data-centers/
41. Ibid.
42. Alex Barkinka, “From textiles to tech, the state’s newest crop,” Reese News Lab, April 13, 2011, http://
reesenews.org/2011/04/13/from-textiles-to-tech-the-states-newest-crop/14263/.
43. “Textile & Apparel Overview,” North Carolina in the Global Economy, http://www.ncglobaleconomy.com/textiles/
overview.shtml.
44. Rich Miller, “The Apple-Google Data Center Corridor,” Data Center knowledge, August 4, 2009, http://
www.datacenterknowledge.com/archives/2009/08/04/the-apple-google-data-center-corridor/.
45. “2010 Decennial Census from the US Census Bureau,” http://factfinder.census.gov/bkmk/cf/1.0/en/place/Lenoir city,
North Carolina/POPULATION/DECENNIAL_CNT.
46. North Carolina in the Global Economy. Retrieved from http://www.soc.duke.edu/NC_GlobalEconomy/furniture/
workers.shtml
47. Frank Langfitt, “Furniture Work Shifts From N.C. To South China,” National Public Radio, December 1, 2009, http://
www.npr.org/templates/story/story.php?storyId=121576791&ft=1&f=121637143; Dan Morse, “In North Carolina,
Furniture Makers Try to Stay Alive,” Wall Street Journal, February 20, 2004, http://www.wsj.com/articles/
SB107724173388134838; Robert Lacy, “Whither North Carolina Furniture Manufacturing,” Federal Reserve Bank of
Richmond, Working Paper Series, September 2004, https://www.richmondfed.org/~/media/richmondfedorg/publications/
research/working_papers/2004/pdf/wp04-7.pdf
48. Stephen Shankland, “Google gives itself leeway for N.C., data center,” Cnet, December 5, 2008, http://
news.cnet.com/8301-1023_3-10114349-93.html; Bill Bradley, “Cities Keep Giving Out Money for Server Farms, See
Very Few Jobs in Return,” Next City, August 15, 2013, https://nextcity.org/daily/entry/cities-keep-giving-out-money-forserver-farms-see-few-jobs-in-return.
49. Katherine Noyes, “Google Taps North Carolina for New Datacenter,” E-Commerce Times, January 19, 2007, http://
www.ecommercetimes.com/story/55266.html?wlc=1255976822
50. Getahn Ward, “Google to invest in new Clarksville data center,” Tennessean, December 22, 2015, http://
www.tennessean.com/story/money/real-estate/2015/12/21/google-invest-500m-new-clarksville-data-center/77474046/.
51. Ingrid Burrington, “The Environmental Toll of a Netflix Binge,” Atlantic, December 16, 2015, http://www.theatlantic.com/
technology/archive/2015/12/there-are-no-clean-clouds/420744/.
52. Mark Bergen, “After Gates, Google Splurges on Green With Largest Renewable Energy Buy for Server Farms,” Re/code,
December 3, 2015, http://recode.net/2015/12/03/after-gates-google-splurges-on-green-with-largest-renewable-energy-buyfor-server-farms/.

53. Burrington, “The Environmental Toll of a Netflix Binge.”; Richard Maxwell and Toby Miller, Greening the media (New
York: Oxford University Press, 2012)
54. “Finnish Paper Industry Uses Court Order to Block Government Protest,” IndustriAll Global Union, http://www.industriallunion.org/archive/icem/finnish-paper-industry-uses-court-order-to-block-government-protest.
55. Terhi Kinnunen and Tarmo Virki, “Stora to cut 985 jobs, close mills despite protests,” Reuters, January 17, 2008, http://
www.reuters.com/article/storaenso-idUSL1732158220080117; “Workers react to threat of closure of paper pulp mills,”
European Foundation, March 3, 2008, http://www.eurofound.europa.eu/observatories/eurwork/articles/workers-react-tothreat-of-closure-of-paper-pulp-mills.
56. David Cord, “Welcome to Finland,” The Helsinki Times, April 9, 2009, http://www.helsinkitimes.fi/helsinkitimes/2009apr/
issue15-95/helsinki_times15-95.pdf.
57. Elina Kervinen, “Google is busy turning the old Summa paper mill into a data centre,” Helsingin Sanomat International Edition,
October 9, 2010, https://web.archive.org/web/20120610020753/http://www.hs.fi/english/article/Google+is+busy
+turning+the+old+Summa+paper+mill+into+a+data+centre/1135260141400.
58. “Google invests 450M in expansion of Hamina data centre,” Helsinki Times, November 4, 2013, http://
www.helsinkitimes.fi/business/8255-google-invests-450m-in-expansion-of-hamina-data-centre.html.
59. “Revealed: Google’s new mega data center in Finland,” Pingdom, September 15, 2010, http://
royal.pingdom.com/2010/09/15/googles-mega-data-center-in-finland/
60. Ibid.
61. Shiv Mehta, “What's Google Strategy for the Russian Market?” Investopedia, July 28, 2015, http://www.investopedia.com/
articles/investing/072815/whats-google-strategy-russian-market.asp.

P.226

P.227

House,
City,
World,
Nation,
Globe
NATACHA ROUSSEL

This timeline starts in Brussels and is an attempt to situate some of the events
in the life, death and revival of the Mundaneum in relation to both local and
international events. By connecting several geographic locations at different
scales, this small research provokes cqrrelations in time and space that could help
formulate questions about the ways local events repeatedly mirror and
recompose global situations. Hopefully, it can also help to see which
contextual elements in the first iteration of the Mundaneum were different from
the current situation of our information economy.
The ambitious project of the Mundaneum was imagined by Paul Otlet, with the support of Henri
La Fontaine, at the end of the 19th century. At that time colonialism was at its height, bringing a
steady stream of income to occidental countries and creating a sense of security that made
everything seem possible. For some of the most forward-thinking persons of the time, it felt as if
the intellectual and material benefits of rational thinking could universally become the source of
all goods. Far from any actual move towards independence, the first tensions between colonial
and commercial powers were starting to manifest themselves. Some conflicts had already erupted,
mainly in defence of commercial interests, such as the Fashoda crisis and the Boer War. The sense
of strength that the large influx of money brought to the colonial powers was, however, quickly
tempered by World War I, which was about to shake up modern European society.
In this context Henri La Fontaine, energised by Paul Otlet's encompassing view of
classification systems and standards, strongly associated the Mundaneum project with an ideal
of world peace. This was a conscious process of thought: they believed that this universal
archive of all knowledge represented a resource for the promotion of education towards the
development of better social relations. While Otlet and La Fontaine were not directly
concerned with economic and colonial issues, their ideals were nevertheless fed by the
wealth of the epoch. The Mundaneum archives were furthermore established with a clear
intention, and a major effort was made to include documents that referred to often neglected
topics or that could be considered alternative thinking, such as the well-known archives of
the feminist movement in Belgium and information on anarchism and pacifism. In line with
the general dynamism caused by growing wealth in Europe at the turn of the century, the
Mundaneum project seemed to be forever growing in size and ambition. It also clearly
appears that the project was embedded in the international and 'politico-economic' context
of its time, and in many aspects linked to a larger movement that engaged civil society towards
a proto-structure of networked society. Through the development of infrastructures for
communication and of international regulations, Henri La Fontaine took part in several
international initiatives: he launched the 'Bureau International de la Paix' as
early as 1907 and, a few years later in 1910, the 'International Union of Associations'.
Overall his interventions helped to root the process of archive collection in a larger network
of associations and regulatory structures. Otlet's view of archives and organisation extended
to all domains, and La Fontaine asserted that general peace could be achieved through social
development, by means of education and access to knowledge. Their common view was
nurtured by an acute perception of their epoch: they observed, and often contributed to, most
of the major experiments triggered by the ongoing reflection on the new
organisational modalities of society.
The ever ambitious process of building the Mundaneum archives took place in the context of a
growing internationalisation of society, while at the same time the social gap was widening due
to the expansion of industrial society. Furthermore, the internationalisation of finances and
relations did not only concern industrial society; it also acted as a motivation to structure social
and political networks, among others via political negotiations and the institution of civil
society organisations. Several broad structures dedicated to the regulation of international
relations were created simultaneously with the worldwide spread of an industrial economy.
They aimed to formulate a world view that would be based on international agreements
and communication systems regulated by governments and structured via civil society
organisations, rather than leaving everything to individual and commercial initiatives. Otlet
and La Fontaine spent a large part of their lives attempting to formulate a mondial society.

From The Itinerant Archive (print):

Museology merged with the International Institute of Bibliography (IIB) which had its offices
in the same building. The ever-expanding index card catalog had already been accessible to
the public since 1914. The project would be later known as the World Palace or Mundaneum.
Here, Paul Otlet and Henri La Fontaine started to work on their Encyclopaedia Universalis
Mundaneum, an illustrated encyclopaedia in the form of a mobile exhibition.
While La Fontaine clearly supported international networks of civil society organisations,
Otlet, according to Vincent Capdepuy[1], was the first person to use the French term
Mondialisation far ahead of his time, advocating what would become after World War II an
important movement that claimed to work for the development of an international regulatory

P.228

P.229

system. Otlet also mentioned that this 'Mondial' process was directly related to the necessity
of a new repartition and regulation of natural goods (think: diamonds and gold ...). He
writes:
« Un droit nouveau doit remplacer alors le droit ancien pour préparer et organiser une
nouvelle répartition. La “question sociale” a posé le problème à l’intérieur ; “la question
internationale” pose le même problème à l’extérieur entre peuples. Notre époque a
poursuivi une certaine socialisation de biens. […] Il s’agit, si l’on peut employer cette
expression, de socialiser le droit international, comme on a socialisé le droit privé, et de
prendre à l’égard des richesses naturelles des mesures de “mondialisation”. »[2]

The approaches of La Fontaine and Otlet already bear certain differences, as one
(Lafontaine) emphasises an organisation based on local civil society structures which implies
direct participation, while the other (Otlet) focuses more on management and global
organisation managed by a regulatory framework. It is interesting to look at these early
concepts that were participating to a larger movement called 'the first mondialisation', and
understand how they differ from current forms of globalisation which equally involve private
and public instances and various infrastructures.
The project of Otlet and Lafontaine took place in an era of international agreements over
communication networks. It is known, and often a subject of fascination, that the global project
of the Mundaneum also involved the conception of a technical infrastructure and
communication systems that were conceived between the two World Wars. Some of them,
such as the Mondotheque, were imagined as prospective possibilities, but others were already
implemented at the time and formed the basis of an international communication network
consisting of postal services and telegraph networks. It is less acknowledged that the epoch
was also a time of international agreements between countries, structuring and normalising
international life; some of these structures still form the basis of our current global economy,
but they are all challenged by private capitalist structures. The existing postal and telegraph
networks covered the entire planet, and the agreements that regulated the price of the stamp,
allowing postal services to be used internationally, were recent. They were certainly among the
first cases in which international agreements regulated commercial interests to the benefit
of individual communication. Henri Lafontaine directly participated in these processes by
asking for postal charges to be waived for the transport of documents between
international libraries, to the benefit of, among others, the Mundaneum. Lafontaine was also
an important promoter of larger international movements that involved civil society
organisations; he was chiefly responsible for the 'Union internationale des associations', which
acted as a network of information-sharing, setting up modalities for exchange to the general
benefit of civil society. Furthermore, concerns were raised about rethinking a social organisation
harmed by the industrial economy, an issue addressed in Brussels by the brand-new
discipline of sociology. The 'Ecole de Bruxelles'[3], in which Otlet and La Fontaine both
took part, was trying very early on to formulate a legal discourse that could help
address social inequalities, and eventually come up with regulations that could help 're-engineer' social organisation.

The Mundaneum project differentiates itself from contemporary enterprises such as Google
not only by its intentions, but also by its organisational context, as it clearly inscribed itself in
an international regulatory framework dedicated to the promotion of local civil
society. How can we understand the similarities and differences between the development of
the Mundaneum project and the current knowledge economy? The timeline below attempts
to re-situate the different events related to the rise and fall of the Mundaneum, in order to
highlight the differences between past and contemporary processes.

DATE | EVENT | TYPE | SCALE

1865 | The International Union of Telegraph is set up; it is an important element of the organisation of a mondial communication network and will later become the International Telecommunication Union (ITU)[4], still active in regulating and standardizing radio-communication. | STANDARD | WORLD

1870 | Franco-Prussian war. | EVENT | WORLD

1874 | The General Postal Union[5] is created, aiming to federate international postal distribution. | STANDARD | WORLD

1875 | General Conference on Weights and Measures in Sèvres, France. | STANDARD | WORLD

1882 | Triple Alliance, renewed in 1902. | EVENT | WORLD

1889 | Henri Lafontaine creates La Société Belge de l'arbitrage et de la paix. | EVENT | NATION

1890s | First colonial wars (Fashoda crisis, Boer War ...). | EVENT | WORLD

1890 | Henri Lafontaine meets Paul Otlet. | PERSON | CITY

1891 | Franco-Russian entente, preliminary to the Triple Entente that will be signed in 1907. | EVENT | WORLD

1891 | Henri Lafontaine publishes an essay, Pour une bibliographie de la paix. | PUBLICATION | NATION

P.230

P.231

1893 | Otlet and Lafontaine start together the Office International de Bibliologie Sociologique (OIBS). | ASSOCIATION | CITY

1894 | Henri Lafontaine is elected senator of the province of Hainaut and later senator of the province of Liège-Brabant. | EVENT | NATION

1895, 2-4 September | First Conférence de Bibliographie, at which it is decided to create the Institut International de Bibliographie (IIB), founded by Henri La Fontaine. | ASSOCIATION | CITY

1900 | Congrès bibliographique international in Paris. | EVENT | WORLD

1903 | Creation of the International Women's Suffrage Alliance (IWSA) that will later become the International Alliance of Women. | ASSOCIATION | WORLD

1904 | Entente cordiale between France and England, which defines their mutual zones of colonial influence in Africa. | EVENT | WORLD

1905 | First Moroccan crisis. | EVENT | WORLD

1907 June | Otlet and Lafontaine organise a Central Office for International Associations that will become the International Union of Associations (IUA) at the first Congrès mondial des associations internationales in Brussels in May 1910. | ASSOCIATION | CITY

1907 | Henri Lafontaine is elected president of the Bureau international de la paix that he previously initiated. | PERSON | NATION

1908 July | Congrès bibliographique international in Brussels. | EVENT | CITY

1910 May | Official creation of the International Union of Associations (IUA). In 1914, it federates 230 organizations, a little more than half of which still exist. The IUA promotes internationalist aspirations and the desire for peace. | ASSOCIATION | WORLD

1910, 25-27 August | Le Congrès International de Bibliographie et de Documentation deals with issues of international cooperation between non-governmental organizations and with the structure of universal documentation. | ASSOCIATION | WORLD

1911 | More than 600 people and institutions are listed as IIB members or refer to their methods, specifically the UDC. | ASSOCIATION | WORLD

1913 | Henri Lafontaine is awarded the Nobel Peace Prize. | EVENT | WORLD

1914 | Germany declares war on France and invades Belgium. | EVENT | WORLD

1916 | Lafontaine publishes The great solution: magnissima charta while in exile in the United States. | PUBLICATION | WORLD

1919 | Opening of the Mundaneum or Palais Mondial at the Cinquantenaire park. | EVENT | CITY

1919, June 28 | The Traité de Versailles marks the end of World War I and the creation of the Société Des Nations (SDN) that will later become the United Nations (UN). | EVENT | WORLD

1924 | Creation (within the IIB) of the Central Classification Commission, focusing on the development of the Universal Decimal Classification (UDC). | ASSOCIATION | NATION

1931 | The IIB becomes the International Institute of Documentation (IID) and in 1938 is named International Federation of Documentation (IDF). | ASSOCIATION | WORLD

1934 | Publication of Otlet's book Traité de documentation. | PUBLICATION | WORLD

1934 | The Mundaneum is closed after a governmental decision. A part of the archives are moved to Rue Fétis 44, Brussels, home of Paul Otlet. | MOVE | HOUSE

P.232

P.233

1939 September | Invasion of Poland by Germany, start of World War II. | EVENT | WORLD

1941 | Some files from the Mundaneum collections concerning international associations are transferred to Germany. They are assumed to have propaganda value. | MOVE | WORLD

1944 | Death of Paul Otlet. He is buried in Etterbeek cemetery. | EVENT | CITY

1947 | The International Telecommunication Union (ITU) is attached to the UN. | STANDARD | GLOBE

1956 | Two separate ITU committees, the Consultative Committee for International Telephony (CCIF) and the Consultative Committee for International Telegraphy (CCIT), are joined to form the CCITT, an institute to create standards, recommendations and models for telecommunications. | STANDARD | GLOBE

1963 | American Standard Code for Information Interchange (ASCII) is developed. | STANDARD | GLOBE

1966 | The ARPANET project is initiated. | ASSOCIATION | NATION

1974 | Telenet, the first public version of the Internet, is founded. | STANDARD | WORLD

1986 | First meeting of the Internet Engineering Task Force (IETF), the US-located informal organization that promotes open standards along the lines of "rough consensus and running code". | STANDARD | GLOBE

1992 | Creation of The Internet Society, an American association with international vocation. | STANDARD | WORLD

1993 | Elio Di Rupo organises the transport of the Mundaneum archives from Brussels to 76 rue de Nimy in Mons. | MOVE | NATION

2012 | Failure of the World Conference on International Telecommunications (WCIT) to reach an international agreement on Internet regulation. | STANDARD | GLOBE

ADDITIONAL TIMELINES

• https://www.timetoast.com/timelines/la-premiere-guerre-mondiale
• http://www.telephonetribute.com/timeline.html
• https://www.reseau-canope.fr/savoirscdi/societe-de-linformation/le-monde-du-livreet-de-la-presse/histoire-du-livre-et-de-la-documentation/biographies/paul-otlet.html
• http://monoskop.org/Otlet
• http://archives.mundaneum.org/fr/historique
REFERENCES
Last
Revision:
28·06·2016

1. Vincent Capdepuy, In the prism of the words. Globalization and the philological argument, https://cybergeo.revues.org/24903
2. Paul Otlet, 1916, Les Problèmes internationaux et la Guerre, les conditions et les facteurs de la vie internationale, Genève/
Paris, Kundig/Rousseau, p. 76.
3. http://www.philodroit.be/IMG/pdf/bf_-_le_droit_global_selon_ecole_de_bruxelles_-2014-3.pdf?lang=fr
4. http://www.itu.int/en/Pages/default.aspx
5. https://en.wikipedia.org/wiki/Universal_Postal_Union

P.234

P.235

The
Smart
City City of
Knowledge
DENNIS POHL

In Paul Otlet's words the Mundaneum is “an idea, an institution, a method, a
material corpus of works and collections, a building, a network.” It became a
lifelong project that he tried to establish together with Henri La Fontaine in
the beginning of the 20th century. The collaboration with Le Corbusier was
limited to the architectural draft of a centre of information, science, and
education, leading to the idea of a “World Civic Center” in Geneva.
Nevertheless the dialectical discourse between both Utopians did not restrict
itself to commissioned design, but reveals the relation between a specific
positivist conception of knowledge and architecture: the system of information
and its spatial distribution according to efficiency principles, a notion that lays
the foundation for what is now called the Smart City.
[1]

FORMULATING THE MUNDANEUM
“We’re on the verge of a historic moment for cities”[2]

“We are at the beginning of a historic transformation in cities. At a time when the
concerns about urban equity, costs, health and the environment are intensifying,
unprecedented technological change is going to enable cities to be more efficient,
responsive, flexible and resilient.”[3]

P.236

P.237

In 1927 Le Corbusier participated in the
design competition for the headquarters of
the League of Nations, but his designs were
rejected. It was then that he first met his
later cher ami Paul Otlet. Both were already
familiar with each other’s ideas and writings,
as evidenced by their use of schemes, but
also through the epistemic assumptions that
underlie their world views.

OTLET, SCHEME AND REALITY

CORBUSIER, CURRENT AND IDEAL
TRAFFIC CIRCULATION

Before meeting Le Corbusier, Otlet was
fascinated by the idea of urbanism as a
science, which systematically organizes all
elements of life in infrastructures of flows.
He was convinced to work with Van der
Swaelmen, who had already planned a
world city on the site of Tervuren near
Brussels in 1919.[4]

VAN DER SWAELMEN - TERVUREN, 1916

For Otlet it was the first time that two
notions from different practices came
together, namely an environment ordered
and structured according to principles of
rationalization and taylorization. On the one
hand, rationalization as an epistemic practice
that reduces all relationships to those of
definable means and ends. On the other
hand, taylorization as the possibility to
analyze and synthesize workflows according
to economic efficiency and productivity.
Nowadays, both principles are used
synonymously: if all modes of production are
reduced to labour, then its efficiency can be
rationally determined through means and ends.
“By improving urban technology, it’s possible to significantly improve the lives of
billions of people around the world. […] we want to supercharge existing efforts in
areas such as housing, energy, transportation and government to solve real problems
that city-dwellers face every day.”[5]

In the meantime, in 1922, Le Corbusier developed his theoretical model of the Plan Voisin,
which served as a blueprint for a vision of Paris with 3 million inhabitants. In the 1925
publication Urbanisme his main objective is to construct “a theoretically water-tight formula
to arrive at the fundamental principles of modern town planning.”[6] For Le Corbusier
“statistics are merciless things,” because they “show the past and foreshadow the future”[7],
therefore such a formula must be based on the objectivity of diagrams, data and maps.

CORBUSIER - SCHEME FOR THE TRAFFIC
CIRCULATION

P.238

P.239

OTLET'S FORMULA

Moreover, they “give us an exact picture of
our present state and also of former states;
[...] (through statistics) we are enabled to
penetrate the future and make those truths
our own which otherwise we could only
have guessed at.”[8] Based on the analysis of
statistical proofs he concluded that the
ancient city of Paris had to be demolished in
order to be replaced by a new one.
Nevertheless, he didn’t arrive at a concrete
formula but rather at a rough scheme.

A formula that includes every atomic entity
was instead developed by his later friend Otlet as an answer to the question he posed in
Monde, on whether the world can be expressed by a determined unifying entity. This is
Otlet’s dream: a “permanent and complete representation of the entire world,”[9] located in
one place.
Early on Otlet understood the active potential of Architecture and Urbanism as a dispositif, a
strategic apparatus, that places an individual in a specific environment and shapes his
understanding of the world.[10] A world that can be determined by ascertainable facts through
knowledge. He thought of his Traité de documentation: le livre sur le livre, théorie et pratique
as an “architecture of ideas”, a manual to collect and organize the world's knowledge, hand in
hand with contemporary architectural developments. As new modernist forms and use of
materials propagated the abundance of decorative elements, Otlet believed in the possibility
of language as a model of 'raw data', reducing it to essential information and unambiguous
facts, while removing all inefficient assets of ambiguity or subjectivity.
“Information, from which has been removed all dross and foreign elements, will be set out in
a quite analytical way. It will be recorded on separate leaves or cards rather than being
confined in volumes,” which will allow the standardized annotation of hypertext for the
Universal Decimal Classification (UDC).[11] Furthermore, the “regulation through
architecture and its tendency of a total urbanism would help towards a better understanding
of the book Traité de documentation and its right functional and holistic desiderata.”[12] An
abstraction would enable Otlet to constitute the “equation of urbanism” as a type of
sociology (S): U = u(S), because according to his definition, urbanism “is an art of
distributing public space in order to raise general human happiness; urbanization is the result
of all activities which a society employs in order to reach its proposed goal; [and] a material
expression of its organization.”[13] The scientific position, which determines all characteristic
values of a certain region by a systematic classification and observation, was developed by the
Scottish biologist and town planner Patrick Geddes, whom Paul Otlet invited to present his
Town Planning Exhibition to an international audience at the 1913 world exhibition in
Gent.[14] What Geddes inevitably takes further is the positivist belief in a totality of science,
which he unfolds from the ideas of Auguste Comte, Frederic Le Play and Elisée Reclus in
order to reach a unified understanding of urban development in a specific context. This
position would make it possible to represent the complexity of an inhabited environment through data.[15]

From A bag but is language nothing of words:

Tim Berners-Lee: [...] Make a beautiful website, but first give us the
unadulterated data, we want the data. We want unadulterated data. OK, we
have to ask for raw data now. And I'm going to ask you to practice that,
OK? Can you say "raw"?
Audience: Raw.
Tim Berners-Lee: Can you say "data"?
Audience: Data.
TBL: Can you say "now"?
Audience: Now!
TBL: Alright, "raw data now"!
[...]

From La ville intelligente - Ville de la connaissance:

Étant donné que les nouvelles formes modernistes et l'utilisation de
matériaux propageaient l'abondance d'éléments décoratifs, Paul Otlet
croyait en la possibilité du langage comme modèle de « données brutes »,
le réduisant aux informations essentielles et aux faits sans ambiguïté,
tout en se débarrassant de tous les éléments inefficaces et subjectifs.
THINKING THE MUNDANEUM

The only person that Otlet considered capable of the architectural realization of the
Mundaneum was Le Corbusier, whom he approached for the first time in spring 1928. In
one of the first letters he addressed the need to link “the idea and the building, in all its
symbolic representation. […] Mundaneum opus maximum.” Aside from being a centre of
documentation, information, science and education, the complex should link the Union of
International Associations (UIA), which was founded by La Fontaine and Otlet in 1907,
and the League of Nations. “A material and moral representation of The greatest Society of
the nations (humanity);” an international city located on an extraterritorial area in Geneva.[16]
Despite their different backgrounds, they easily understood each other, since they “did
frequently use similar terms such as plan, analysis, classification, abstraction, standardization
and synthesis, not only to bring conceptual order into their disciplines and knowledge
organization, but also in human action.”[17] Moreover, the appearance of common terms in
their most significant publications is striking. Such as spirit, mankind, elements, work, system
and history, just to name a few. These circumstances led both Utopians to think of the
Mundaneum as a system, rather than a singular central type of building; it was meant to
include as many resources in the development process as possible. Because the Mundaneum
is “an idea, an institution, a method, a material corpus of works and collections, a building, a
network,”[18] it had to be conceptualized as an “organic plan with the possibility to expand on
different scales with the multiplication of each part.”[19] The possibility of expansion and an
organic redistribution of elements adapted to new necessities and needs, is what guarantees
the system efficiency, namely by constantly integrating more resources. By designing and
standardizing forms of life up to the smallest element, modernism propagated a new form of
living which would ensure the utmost efficiency. Otlet supported and encouraged Le
Corbusier with his words: “The twentieth century is called upon to build a whole new
civilization. From efficiency to efficiency, from rationalization to rationalization, it must so raise
itself that it reaches total efficiency and rationalization. […] Architecture is one of the best
bases not only of reconstruction (the deforming and skimpy name given to the whole of postwar activities) but of intellectual and social construction to which our era should dare to lay
claim.”[20] Like the Wohnmaschine, in Corbusier’s famous housing project Unité d'habitation,

P.240

P.241

the distribution of elements is shaped according to man's needs. The premise which underlies
this notion is that man's needs and desires can be determined, normalized and standardized
following geometrical models of objectivity.
“making transportation more efficient and lowering the cost of living, reducing energy
usage and helping government operate more efficiently”[21]
BUILDING THE MUNDANEUM

In the first working phase, from March to September 1928, the plans for the Mundaneum
seemed more a commissioned work than a collaboration. In the 3rd person singular, Otlet
submitted descriptions and organizational schemes which would represent the institutional
structures in a diagrammatic manner. In exchange, Le Corbusier drafted the architectural
plans and detailed descriptions, which led to the publication N° 128 Mundaneum, printed
by International Associations in Brussels.[22] Le Corbusier seemed a little less enthusiastic
about the Mundaneum project than Otlet, mainly because of his scepticism towards the
League of Nations, which he called a “misguided” and “pre-machinist creation.”[23] The
rejection of his proposal for the Palace for the League of Nations in 1927, expressed with
anger in a public announcement, might also have played a role. However, the second phase, from
September 1928 to August 1929, was marked by a strong friendship evidenced by the rise
of the international debate after their first publications, letters starting with cher ami and their
agreement to advance the project to the next level by including more stakeholders and
developing the Cité mondiale. This led to the second publication by Paul Otlet, La Cité
mondiale in February 1929, which unexpectedly traumatized the diplomatic environment in
Geneva. Although both tried to organize personal meetings with key stakeholders, the project
didn't find support for its realization, especially after Switzerland had withdrawn its offer of
providing extraterritorial land for Cité mondiale. Instead, Le Corbusier focussed on his Ville
Radieuse concept, which was presented at the 3rd CIAM meeting in Brussels in 1930.[24]
He considered Cité mondiale as “a closed case”, and withdrew himself from the political
environment by considering himself without any political color, “since the groups that gather
around our ideas are militaristic bourgeois, communists, monarchists, socialists, radicals,
League of Nations and fascists. When all colors are mixed, only white is the result. That
stands for prudence, neutrality, decantation and the human search for truth.”[25]
GOVERNING THE MUNDANEUM

Le Corbusier considered himself and his work “apolitical” or “above politics”.[26] Otlet,
however, was more aware of the political force of his project. “Yet it is important to predict.
To know in order to predict and to predict in order to control, was Comte's positive
philosophy. Prediction doesn't cost a thing, was added by a master of contemporary urbanism
(Le Corbusier).”[27] Lobbying for the Cité mondiale project, Le Corbusier wrote to Arthur
Fontaine and Albert Thomas from the International Labor Organization that prediction is
free and “preparing the ways for the coming years”.[28] Free because statistical data is always
available, but he didn't seem to consider that prediction is a form of governing. A similar
premise underlies the present domination of the smart city ideologies, where large amounts of
data are used to predict for the sake of efficiency. Although most of the actors behind these
ideas consider themselves apolitical, the governmental aspect is more than obvious. A form of
control and government, which is not only biopolitical but rather epistemic. The data is not
only used to standardize units for architecture, but also to determine categories of knowledge
that restrict life to the normality of what can be classified. What becomes clear in this
juxtaposition of Le Corbusier's and Paul Otlet's work is that the standardization of
architecture goes hand in hand with an epistemic standardization because it limits what can
be thought, experienced and lived to what is already there. This architecture has to be
considered as an “epistemic object”, which exemplifies the cultural logic of its time.[29] By its
presence it brings the abstract cultural logic underlying its conception into the everyday
experience, and becomes with material, form and function an actor that performs an epistemic
practice on its inhabitants and users. In this case: the conception that everything can be
known, represented and (pre)determined through data.

P.242

P.243

1. Paul Otlet, Monde: essai d'universalisme - Connaissance du Monde, Sentiment du Monde, Action organisee et Plan du Monde
, (Bruxelles: Editiones Mundeum 1935): 448.
2. Steve Lohr, “Sidewalk Labs, a Start-Up Created by Google, Has Bold Aims to Improve City Living”, in New York Times
11.06.15, http://www.nytimes.com/2015/06/11/technology/sidewalk-labs-a-start-up-created-by-google-has-bold-aims-toimprove-city-living.html?_r=0, quoted here is Dan Doctoroff, founder of Google Sidewalk Labs

3. Dan Doctoroff, 10.06.2015, http://www.sidewalkinc.com/relevant
4. Giuliano Gresleri and Dario Matteoni. La Città Mondiale: Andersen, Hébrard, Otlet, Le Corbusier. (Venezia: Marsilio,
1982): 128; See also: L. Van der Swaelmen, Préliminaires d'art civique (Leynde 1916): 164 – 299.
5. Larry Page, Press release, 10.06.2015, http://www.sidewalkinc.com/
6. Le Corbusier, “A Contemporary City” in The City of Tomorrow and its Planning, (New York: Dover Publications 1987):
164.
7. ibid.: 105 & 126.
8. ibid.: 108.
9. Rayward, W Boyd, “Visions of Xanadu: Paul Otlet (1868–1944) and Hypertext” in Journal of the American Society for
Information Science, (Volume 45, Issue 4, May 1994): 235.
10. The French term dispositif, or apparatus in translation, refers to Michel Foucault's description of a merely strategic function, “a
thoroughly heterogeneous ensemble consisting of discourses, institutions, architectural forms, regulatory decisions, laws,
administrative measures, scientific statements, philosophical, moral and philanthropic propositions – in short, the said as much as
the unsaid.” This distinction allows to go beyond the mere object, and rather deconstruct all elements involved in the production
conditions and relate them to the distribution of power. See: Michel Foucault, “Confessions of the Flesh (1977) interview”, in
Power/Knowledge Selected Interviews and Other Writings, Colin Gordon (Ed.), (New York: Pantheon Books 1980): 194 –
200.
11. Bernd Frohmann, “The role of facts in Paul Otlet’s modernist project of documentation”, in European Modernism and the
Information Society, Rayward, W.B. (Ed.), (London: Ashgate Publishers 2008): 79.
12. “La régularisation de l’architecture et sa tendance à l’urbanisme total aident à mieux comprendre le livre et ses propres
desiderata fonctionnels et intégraux.” See: Paul Otlet, Traité de documentation, (Bruxelles: Mundaneum, Palais Mondial,
1934): 329.
13. “L'urbanisme est l'art d'aménager l'espace collectif en vue d'accroîte le bonheur humain général; l'urbanisation est le résulat de
toute l'activité qu'une Société déploie pour arriver au but qu'elle se propose; l'expression matérielle (corporelle) de son
organisation.” ibid.: 205.
14. Thomas Pearce, Mettre des pierres autour des idées, Paul Otlet, de Cité Mondiale en de modernistische stedenbouw in de jaren
1930, (KU Leuven: PhD Thesis 2007): 39.
15. Volker Welter, Biopolis Patrick Geddes and the City of Life. (Cambridge, Mass: MIT 2003).
16. Letter from Paul Otlet to Le Corbusier and Pierre Jeanneret, Brussels 2nd April 1928. See: Giuliano Gresleri and Dario
Matteoni. La Città Mondiale: Andersen, Hébrard, Otlet, Le Corbusier. (Venezia: Marsilio, 1982): 221-223.
17. W. Boyd Rayward (Ed.), European Modernism and the Information Society. (London: Ashgate Publishers 2008): 129.
18. “Le Mundaneum est une Idée, une Institution, une Méthode, un Corps matériel de traveaux et collections, un Edifice, un
Réseau.” See: Paul Otlet, Monde: essai d'universalisme - Connaissance du Monde, Sentiment du Monde, Action organisee et
Plan du Monde, (Bruxelles: Editiones Mundeum 1935): 448.
19. Giuliano Gresleri and Dario Matteoni. La Città Mondiale: Andersen, Hébrard, Otlet, Le Corbusier. (Venezia: Marsilio,
1982): 223.
20. Le Corbusier, Radiant City, (New York: The Orion Press 1964): 27.
21. http://www.sidewalkinc.com/
22. Giuliano Gresleri and Dario Matteoni. La Città Mondiale: Andersen, Hébrard, Otlet, Le Corbusier. (Venezia: Marsilio,
1982): 128
23. ibid.: 232.
24. ibid.: 129.
25. ibid.: 255.
26. Eric Paul Mumford, The CIAM Discourse on Urbanism, 1928-1960, (Cambridge: MIT Press, 2002): 20.
27. “Savoir, pour prévoir afin de pouvoir, a été la lumineuse formule de Comte. Prévoir ne coûte rien, a ajouté un maître de
l'urbanisme contemporain (Le Corbusier).” See: Paul Otlet, Monde: essai d'universalisme - Connaissance du Monde,
Sentiment du Monde, Action organisee et Plan du Monde, (Bruxelles: Editiones Mundeum 1935): 407.
28. Giuliano Gresleri and Dario Matteoni. La Città Mondiale: Andersen, Hébrard, Otlet, Le Corbusier. (Venezia: Marsilio,
1982): 241.
29. Considering architecture as an object of knowledge formation, the term “epistemic object”, coined by the German philosopher Günter
Abel, helps bring forth the epistemic characteristic of architecture. Epistemic objects, according to Abel, are those on which our
knowledge and empirical curiosity are focused. They are objects that make an active contribution to what can be thought and
how it can be thought. Moreover, because one cannot avoid architecture, it determines our boundaries (of thinking). See:
Günter Abel, Epistemische Objekte – was sind sie und was macht sie so wertvoll?, in: Hingst, Kai-Michael; Liatsi, Maria
(ed.), (Tübingen: Pragmata, 2008).

P.244

P.245

La ville
intelligente
- Ville
de la
connaissance
DENNIS POHL

Selon les mots de Paul Otlet, le Mundaneum est « une idée, une institution,
une méthode, un corpus matériel de travaux et de collections, une construction,
un réseau. » Il est devenu le projet d'une vie qu'il a tenté de mettre sur pied
avec Henri La Fontaine au début du 20e siècle. La collaboration avec Le
Corbusier se limitait au projet architectural d'un centre d'informations, de
science et d'éducation qui conduira à l'idée d'un « World Civic Center », à
Genève. Cependant, le discours dialectique entre les deux utopistes ne s'est
pas limité à une réalisation commissionnée, il a révélé la relation entre une
conception positiviste spécifique de la connaissance et l'architecture ; le
système de l'information et la distribution spatiale d'après des principes
d'efficacité. Une notion qui a apporté la base de ce qu'on appelle aujourd'hui
la Ville intelligente.
[1]

FORMULER LE MUNDANEUM
« Nous sommes à l'aube d'un moment historique pour les villes »[2]

« Nous sommes à l'aube d'une transformation historique des villes. À une époque où les
préoccupations pour l'égalité urbaine, les coûts, la santé et l'environnement augmentent, un
changement technologique sans précédent va permettre aux villes d'être plus efficaces,
réactives, flexibles et résistantes. »[3]

P.246

P.247

OTLET, SCHÉMA ET RÉALITÉ

CORBUSIER, CIRCULATION DU TRAFIC
ACTUELLE ET IDÉALE

En 1927, Le Corbusier a participé à une
compétition de design pour le siège de la
Ligue des nations. Cependant, ses
propositions furent rejetées. C'est à ce
moment qu'il a rencontré, pour la première
fois, son cher ami Paul Otlet. Tous deux
connaissaient déjà les idées et les écrits de
l'autre, comme le montre leur utilisation des
plans, mais également les suppositions
épistémiques à la base de leur vues sur le
monde. Avant de rencontrer Le Corbusier,
Paul Otlet était fasciné par l'idée d'un
urbanisme scientifique qui organise
systématiquement tous les éléments de la vie
par des infrastructures de flux. Il avait été
convaincu de travailler avec Van der
Swaelmen, qui avait déjà prévu une ville
monde sur le site de Tervuren, près de
Bruxelles, en 1919.[4]

VAN DER SWAELMEN - TERVUREN, 1916

Pour Paul Otlet, c'était la première fois que
deux notions de pratiques différentes se
rassemblaient, à savoir un environnement
ordonné et structuré d'après des principes de
rationalisation et de taylorisme. D'un côté, la
rationalisation : une pratique épistémique qui
réduit toutes les relations à des moyens et
des fins définissables. D'un autre, le
taylorisme : une possibilité d'analyse et de
synthèse des flux de travail fonctionnant
selon les règles de l'efficacité économique et
productive. De nos jours, les deux principes
sont considérés comme des synonymes : si
tous les modes de production sont réduits au
travail, alors l'efficacité peut être rationnellement déterminée par les moyens et les fins.
« En améliorant la technologie urbaine, il est possible d'améliorer de manière
significative la vie de milliards de gens dans le monde. […] nous voulons encourager
les efforts existants dans des domaines comme l'hébergement, l'énergie, le transport et le
gouvernement afin de résoudre des problèmes réels auxquels les citadins font face au
[5]
quotidien. »

Pendant ce temps, en 1922, Le Corbusier avait développé son modèle théorique du Plan
Voisin qui a servi de projet pour une vision de Paris avec trois millions d'habitants. Dans la
publication de 1925 d'Urbanisme, son objectif principal est la construction « d'un édifice
théorique rigoureux, à formuler des principes fondamentaux d'urbanisme moderne. »[6] Pour
Le Corbusier, « la statistique est implacable », car « la statistique montre le passé et esquisse
l’avenir »[7], dès lors, une telle formule doit être basée sur l'objectivité des diagrammes, des
données et des cartes.

P.248

P.249

De plus, « la statistique donne la situation
exacte de l’heure présente, mais aussi les
états antérieurs ; [...] (à travers les
statistiques) nous pouvons pénétrer dans
l’avenir et acquérir des certitudes anticipées ».[8]
À partir de l'analyse des preuves
statistiques, il conclut que la vieille ville de
Paris devait être démolie afin d'être
remplacée par une nouvelle. Cependant, il
n'est pas arrivé à une formule concrète, mais
à un plan approximatif.
CORBUSIER - SCHÉMA POUR UNE
CIRCULATION DU TRAFIC

À la place, une formule comprenant chaque
entité atomique fut développée par son ami
Paul Otlet en réponse à la question qu'il
publia dans Monde pour savoir si le monde
pouvait être exprimé par une entité
unificatrice déterminée. Voici le rêve de
Paul Otlet : une « représentation
permanente et complète du monde entier »[9]
dans un même endroit.

LA FORMULE D'OTLET

Paul Otlet comprit rapidement le potentiel
actif de l'architecture et de l'urbanisme en
tant que dispositif stratégique qui place un
individu dans un environnement spécifique
et façonne sa compréhension du monde.[10]
Un monde qui peut être déterminé par des faits vérifiables à travers la connaissance. Il a
pensé son Traité de documentation: le livre sur le livre, théorie et pratique comme une
« architecture des idées », un manuel pour collecter et organiser la connaissance du monde
en association avec les développements architecturaux contemporains.

Étant donné que les nouvelles formes modernistes et
l'utilisation de matériaux propageaient l'abondance
d'éléments décoratifs, Paul Otlet croyait en la possibilité
du langage comme modèle de « données brutes », le
réduisant aux informations essentielles et aux faits sans
ambiguïté, tout en se débarrassant de tous les éléments
inefficaces et subjectifs.

« Des informations, dont tout déchet et élément étrangers ont été supprimés, seront
présentées d'une manière assez analytique. Elles seront encodées sur différentes feuilles
ou cartes plutôt que confinées dans des volumes, » ce qui permettra l'annotation
standardisée de l'hypertexte pour la classification décimale universelle (CDU).[11] De plus,
la « régulation à travers l'architecture et sa tendance à un urbanisme total favoriseront une
meilleure compréhension du livre Traité de documentation ainsi que du désidérata
fonctionnel et holistique adéquat. »[12] Une abstraction permettrait à Paul Otlet de constituer
« l'équation de l'urbanisme » comme un type de sociologie : U = u(S), car selon sa
définition, « L'urbanisme est l'art d'aménager l'espace collectif en vue d'accroître le
bonheur humain général ; l'urbanisation est le résultat de toute l'activité qu'une Société
déploie pour arriver au but qu'elle se propose ; l'expression matérielle (corporelle)
de son organisation. »[13] La position scientifique qui détermine toutes les valeurs
caractéristiques d'une certaine région par une classification et une observation
systémiques a été avancée par le biologiste écossais et planificateur de villes, Patrick
Geddes, qui fut invité par Paul Otlet pour l'exposition universelle de 1913 à Gand
afin de présenter à un public international sa Town Planning Exhibition.[14] Patrick Geddes
allait inévitablement plus loin dans sa croyance positiviste en une totalité de la science, une
croyance qui découle des idées d'Auguste Comte, de Frederic Le Play et d'Elisée Reclus,
pour atteindre une compréhension unifiée du développement urbain dans un contexte
spécifique. Cette position permettrait de représenter à travers des données la complexité d'un
environnement habité.[15]

From A bag but is language nothing of words:

Tim Berners-Lee: [...] Make a beautiful website, but first give us the
unadulterated data, we want the data. We want unadulterated data. OK, we
have to ask for raw data now. And I'm going to ask you to practice that,
OK? Can you say "raw"?
Audience: Raw.
Tim Berners-Lee: Can you say "data"?
Audience: Data.
TBL: Can you say "now"?
Audience: Now!
TBL: Alright, "raw data now"!
[...]

From The Smart City - City of Knowledge:

As new modernist forms and use of materials propagated the abundance
of decorative elements, Otlet believed in the possibility of language as a
model of 'raw data', reducing it to essential information and
unambiguous facts, while removing all inefficient assets of ambiguity or
subjectivity.
PENSER LE MUNDANEUM

La seule personne que Paul Otlet estimait capable de réaliser l'architecture du Mundaneum
était Le Corbusier, qu'il approcha pour la première fois au printemps 1928. Dans une de

P.250

P.251

ses premières lettres, il évoqua le besoin de lier « l'idée et la construction, dans toute sa
représentation symbolique. […] Mundaneum opus maximum. » En plus d'être un centre de
documentation, d'informations, de science et d'éducation, le complexe devrait lier l'Union des
associations internationales (UAI), fondée par La Fontaine et Otlet en 1907, et la Ligue
des nations. « Une représentation morale et matérielle de The greatest Society of the nations
(humanité) ; » une ville internationale située dans une zone extraterritoriale à Genève.[16]
Malgré les différents milieux dont ils étaient issus, ils pouvaient facilement se comprendre
puisqu'ils « utilisaient fréquemment des termes similaires comme plan, analyse, classification,
abstraction, standardisation et synthèse, non seulement pour un ordre conceptuel dans leurs
disciplines et l'organisation de leur connaissance, mais également dans l'action humaine. »[17]
De plus, l'apparence des termes dans leurs publications les plus importantes est frappante.
Pour n'en nommer que quelques-uns : esprit, humanité, travail, système et histoire. Ces
circonstances ont conduit les deux utopistes à penser le Mundaneum comme un système
plutôt que comme un type de construction central singulier ; le processus de développement
cherchait à inclure autant de ressources que possible. Puisque « Le Mundaneum est une
Idée, une Institution, une Méthode, un Corps matériel de travaux et collections, un Édifice,
un Réseau. »[18] il devait être conceptualisé comme un « plan organique avec possibilité
d'expansion à différentes échelles grâce à la multiplication de chaque partie. »[19] La
possibilité d'expansion et la redistribution organique des éléments adaptées à de nouvelles
nécessités et besoins garantit l'efficacité du système, à savoir en intégrant plus de ressources
en permanence. En concevant et normalisant des formes de vie, même pour le plus petit
élément, le modernisme a propagé une nouvelle forme de vie qui garantirait l'efficacité
optimale. Paul Otlet a soutenu et encouragé Le Corbusier avec ces mots : « Le vingtième
siècle est appelé à construire une toute nouvelle civilisation. De l'efficacité à l'efficacité, de la
rationalisation à la rationalisation, il doit s'élever et atteindre l'efficacité et la rationalisation
totales. […] L'architecture est l'une des meilleures bases, non seulement de la reconstruction
(le nom étriqué et déformant donné à toutes les activités d'après-guerre), mais à la
construction intellectuelle et sociale à laquelle notre ère devrait oser prétendre. »[20] Comme la
Wohnmaschine, dans le célèbre projet d'habitation du Corbusier, Unité d'habitation, la
distribution des éléments est établie en fonction des besoins de l'homme. Le principe qui sous-tend
cette notion est l'idée que les besoins et les désirs de l'homme peuvent être déterminés,
normalisés et standardisés selon des modèles géométriques d'objectivité.
« rendre le transport plus efficace et diminuer le coût de la vie, la consommation
[21]
d'énergie et aider le gouvernement à fonctionner plus efficacement »
CONSTRUIRE LE MUNDANEUM

Dans la première phase de travail, de mars à septembre 1928, les plans du Mundaneum
ressemblaient plus à un travail commissionné qu'à une collaboration. À la troisième personne
du singulier, Paul Otlet a soumis des descriptions et des projets organisationnels qui
représenteraient les structures institutionnelles de manière schématique. En échange, Le
Corbusier a réalisé le brouillon des plans architecturaux et les descriptions détaillées, ce qui

conduisit à la publication du N° 128 Mundaneum, imprimée par Associations
Internationales à Bruxelles.[22] Le Corbusier semblait un peu moins enthousiaste que Paul
Otlet concernant le Mundaneum, principalement à cause de son scepticisme vis-à-vis de la
Ligue des nations dont il disait qu'elle était « fourvoyée » et « une création prémachiniste ».[23]
Le rejet de sa proposition pour le palais de la Ligue des nations en 1927, exprimé avec
colère dans une déclaration publique, jouait peut-être également un rôle. Cependant, la
seconde phase, de septembre 1928 à août 1929, fut marquée par une amitié solide dont
témoigne l'amplification du débat international après leurs premières publications, des lettres
commençant par « cher ami », leur accord concernant l'avancement du projet au prochain
niveau avec l'intégration d'actionnaires et le développement de la Cité mondiale. Cela
conduisit à la seconde publication de Paul Otlet, la Cité mondiale, en février 1929, qui
traumatisa de manière inattendue l'environnement diplomatique de Genève. Même si tous
deux tentèrent d'organiser des entretiens personnels avec des acteurs clés, le projet ne trouva
pas de soutien pour sa réalisation, d'autant moins après le retrait de la proposition de la
Suisse de fournir un territoire extraterritorial pour la Cité mondiale. À la place, Le Corbusier
s'est concentré sur son concept de la Ville Radieuse qui fut présenté lors du 3e CIAM à
Bruxelles, en 1930.[24] Il considérait la Cité mondiale comme « une affaire classée » et s'était
retiré de l'environnement politique en considérant qu'il n'avait aucune couleur politique
« puisque les groupes qui se rassemblent autour de nos idées sont des bourgeois militaristes,
des communistes, des monarchistes, des socialistes, des radicaux, la Ligue des nations et des
fascistes. Lorsque toutes les couleurs sont mélangées, seul le blanc ressort. Il représente la
prudence, la neutralité, la décantation et la recherche humaine de la vérité. »[25]
DIRIGER LE MUNDANEUM

Le Corbusier considérait son travail et lui-même comme étant « apolitiques » ou « au-dessus
de la politique ».[26] Cependant, Paul Otlet était plus conscient de la force politique de ce
projet. « Savoir, pour prévoir afin de pouvoir, a été la lumineuse formule de Comte. Prévoir
ne coûte rien, a ajouté un maitre de l'urbanisme contemporain (Le Corbusier). »[27] En faisant la promotion du projet de la Cité mondiale, Le Corbusier écrivit à Arthur Fontaine et Albert Thomas, de l'Organisation internationale du travail, que la prévision était gratuite et « préparait les années à venir ».[28]
Gratuite, car les données statistiques sont toujours disponibles, cependant, il ne semblait pas
considérer la prévision comme une forme de pouvoir. Une prémisse similaire est à l'origine
de la domination actuelle des idéologies de la ville intelligente où de grandes quantités de
données sont utilisées pour prévoir au nom de l'efficacité. Même si la plupart des acteurs
derrière ces idées se considèrent apolitiques, l'aspect gouvernemental est plus qu'évident.
Une forme de contrôle et de gouvernement n'est pas seulement biopolitique, mais plutôt
épistémique. Les données sont non seulement utilisées pour standardiser les unités pour
l'architecture, mais également pour déterminer les catégories de connaissance qui restreignent
la vie à la normalité dans laquelle elle peut être classée. Dans cette juxtaposition du travail de
Le Corbusier et Paul Otlet, il devient clair que la standardisation de l'architecture va de pair

P.252

P.253

avec une standardisation épistémique, car elle limite ce qui peut être pensé, ressenti et vécu à
ce qui existe déjà. Cette architecture doit être considérée comme un « objet épistémique »
qui illustre la logique culturelle de son époque.[29] Par sa présence, elle apporte la logique
culturelle abstraite sous-jacente à sa conception dans l'expérience quotidienne et devient, au
côté de la matière, de la forme et de la fonction, un acteur qui accomplit une pratique
épistémique sur ses habitants et ses usagers. Dans ce cas : la conception selon laquelle tout
peut être connu, représenté et (pré)déterminé à travers les données.

Last
Revision:
2·08·2016

1. Paul Otlet, Monde : essai d'universalisme - Connaissance du Monde, Sentiment du Monde, Action organisée et Plan du
Monde, (Bruxelles : Editiones Mundeum 1935) : 448.

P.254

P.255

2. Steve Lohr, « Sidewalk Labs, a Start-Up Created by Google, Has Bold Aims to Improve City Living », dans le New York Times, 11/06/15, http://www.nytimes.com/2015/06/11/technology/sidewalk-labs-a-start-up-created-by-google-has-bold-aims-to-improve-city-living.html?_r=0, citation de Dan Doctoroff, fondateur de Google Sidewalk Labs
3. Dan Doctoroff, 10/06/2015, http://www.sidewalkinc.com/relevant
4. Giuliano Gresleri et Dario Matteoni. La Città Mondiale : Andersen, Hébrard, Otlet, Le Corbusier. (Venise : Marsilio,
1982) : 128 ; Voir aussi : L. Van der Swaelmen, Préliminaires d'art civique (Leyde 1916) : 164 - 299.
5. Larry Page, Communiqué de presse, 10/06/2015, http://www.sidewalkinc.com/
6. Le Corbusier, « Une Ville Contemporaine » dans Urbanisme, (Paris : Les Éditions G. Crès & Cie 1924) : 158.
7. ibid. : 115 et 97.
8. ibid. : 100.
9. Rayward, W Boyd, « Visions of Xanadu: Paul Otlet (1868–1944) and Hypertext » dans le Journal of the American Society
for Information Science, (Volume 45, Numéro 4, mai 1994) : 235.
10. Le terme français « dispositif » fait référence à la description de Michel Foucault d'une fonction simplement stratégique, « un
ensemble réellement hétérogène constitué de discours, d'institutions, de formes architecturales, de décisions régulatrices, de lois,
de mesures administratives, de déclarations scientifiques, philosophiques, morales et de propositions philanthropiques. En
résumé, ce qui est dit comme ce qui ne l'est pas. » La distinction permet d'aller plus loin que le simple objet, et de déconstruire
tous les éléments impliqués dans les conditions de production et de les lier à la distribution des pouvoirs. Voir : Michel Foucault,
« Confessions of the Flesh (1977) interview », dans Power/Knowledge Selected Interviews and Other Writings, Colin
Gordon (Éd.), (New York : Pantheon Books 1980) : 194 - 200.
11. Bernd Frohmann, « The role of facts in Paul Otlet’s modernist project of documentation », dans European Modernism and the
Information Society, Rayward, W.B. (Éd.), (Londres : Ashgate Publishers 2008) : 79.
12. « La régularisation de l’architecture et sa tendance à l’urbanisme total aident à mieux comprendre le livre et ses propres
désiderata fonctionnels et intégraux. » Voir : Paul Otlet, Traité de documentation, (Bruxelles : Mundaneum, Palais Mondial,
1934) : 329.
13. ibid. : 205.
14. Thomas Pearce, Mettre des pierres autour des idées, Paul Otlet, de Cité Mondiale en de modernistische stedenbouw in de
jaren 1930, (KU Leuven : PhD Thesis 2007) : 39.
15. Volker Welter, Biopolis Patrick Geddes and the City of Life. (Cambridge, Mass : MIT 2003).
16. Lettre de Paul Otlet à Le Corbusier et Pierre Jeanneret, Bruxelles, 2 avril 1928. Voir : Giuliano Gresleri et Dario Matteoni.
La Città Mondiale : Andersen, Hébrard, Otlet, Le Corbusier. (Venise : Marsilio, 1982) : 221-223.
17. W. Boyd Rayward (Éd.), European Modernism and the Information Society. (Londres : Ashgate Publishers 2008) : 129.
18. Voir : Paul Otlet, Monde : essai d'universalisme - Connaissance du Monde, Sentiment du Monde, Action organisée et Plan du
Monde, (Bruxelles : Editiones Mundeum 1935) : 448.
19. Giuliano Gresleri et Dario Matteoni. La Città Mondiale : Andersen, Hébrard, Otlet, Le Corbusier. (Venise : Marsilio,
1982) : 223.
20. Le Corbusier, Radiant City, (New York : The Orion Press 1964) : 27.
21. http://www.sidewalkinc.com/
22. Giuliano Gresleri et Dario Matteoni. La Città Mondiale : Andersen, Hébrard, Otlet, Le Corbusier. (Venise : Marsilio,
1982) : 128
23. ibid. : 232.
24. ibid. : 129.
25. ibid. : 255.
26. Eric Paul Mumford, The CIAM Discourse on Urbanism, 1928-1960, (Cambridge : MIT Press, 2002) : 20.
27. Voir : Paul Otlet, Monde : essai d'universalisme - Connaissance du Monde, Sentiment du Monde, Action organisée et Plan du
Monde, (Bruxelles : Editiones Mundeum 1935) : 407.
28. Giuliano Gresleri et Dario Matteoni. La Città Mondiale : Andersen, Hébrard, Otlet, Le Corbusier. (Venise : Marsilio,
1982) : 241.
29. En considérant l'architecture comme un objet de formation du savoir, le terme « objet épistémique » du philosophe Günter Abel
aide à produire la caractéristique épistémique de l'architecture. D'après Günter Abel, les objets épistémiques sont ceux sur lesquels notre connaissance et notre curiosité empirique sont concentrées. Ce sont des objets qui ont une contribution active en ce qui
concerne ce qui peut être pensé et la manière dont cela peut être pensé. De plus, puisque personne ne peut éviter l'architecture,
elle détermine nos limites (de pensée). Voir : Günter Abel, Epistemische Objekte – was sind sie und was macht sie so
wertvoll?, dans : Hingst, Kai-Michael; Liatsi, Maria (éd.), (Tübingen : Pragmata, 2008).

The
Itinerant
Archive
The project of the Mundaneum and its many protagonists is undoubtedly
linked to the context of late 19th century Brussels. King Leopold II, in an
attempt to awaken his country's desire for greatness, let a steady stream of
capital flow into the city from his private colony in Congo. Located on the
crossroads between France, Germany, The Netherlands and The United
Kingdom, the Belgian capital formed a fertile ground for institutional
projects with international ambitions, such as the Mundaneum. Its tragic
demise was unfortunately equally at home in Brussels. Already in Otlet's
lifetime, the project fell prey to the disinterest of its former patrons, not
surprising after World War I had shaken their confidence in the beneficial
outcomes of a global knowledge infrastructure. A complex entanglement of disinterested management and provincial politics sent the numerous boxes and
folders on a long trajectory through Brussels, until they finally slipped out of
the city. It is telling that the Capital of Europe has been unable to hold on to its
pertinent past.

P.256

P.257

This tour is a kind of itinerant monument to the Mundaneum in Brussels. It
takes you along the many temporary locations of the archives, guided by the
words of caretakers, reporters and biographers who have crossed its path.
Following the increasingly dispersed and dwindling collection through the city
and centuries, you won't come across any material trace of its passage. You
might discover many unknown corners of Brussels though.
1919: MUSÉE INTERNATIONAL

Outre le Répertoire bibliographique universel et un Musée de la presse qui
comptera jusqu’à 200.000 spécimens de journaux du monde entier, on y trouvera
quelque 50 salles, sorte de musée de l’humanité technique et scientifique. Cette
décennie représente l’âge d’or pour le Mundaneum, même si le gros de ses
collections fut constitué entre 1895 et 1914, avant l’existence du Palais Mondial.
L’accroissement des collections ne se fera, par la suite, plus jamais dans les mêmes
[1]
proportions.
En 1920, le Musée international et les institutions créées par Paul Otlet et Henri
La Fontaine occupent une centaine de salles. L’ensemble sera désormais appelé
Palais Mondial ou Mundaneum. Dans les années 1920, Paul Otlet et Henri La
Fontaine mettront également sur pied l’Encyclopedia Universalis Mundaneum,
[2]
encyclopédie illustrée composée de tableaux sur planches mobiles.

Start at Parc du Cinquantenaire 11,
Brussels in front of the entrance of
what is now Autoworld.

In 1919, significantly delayed by World War I, the Musée international finally opened. The
project had been conceptualised by Paul Otlet and Henri La Fontaine ten years
earlier and was meant to be a mix of documentation center, conference venue and
educational display. It occupied the left wing of the magnificent buildings erected in the Parc
Cinquantenaire for the Grand Concours International des Sciences et de l'industrie.
Museology merged with the International Institute of Bibliography (IIB), which had its offices in the same building. The ever-expanding index card catalog had already been accessible to the public since 1914. The project would later be known as the World Palace or Mundaneum. Here, Paul Otlet and Henri La Fontaine started to work on their Encyclopaedia Universalis Mundaneum, an illustrated encyclopaedia in the form of a mobile exhibition.

From House, City, World, Nation, Globe:
The ever ambitious process of building the Mundaneum archives took place in the context of a growing internationalisation of society, while at the same time the social gap was increasing due to the expansion of industrial society. Furthermore, the internationalisation of finances and relations did not only concern industrial society, it also acted as a motivation to structure social and political networks, among others via political negotiations and the institution of civil society organisations.

Walk under the colonnade to your right, and you will recognise the former entrance of Le Palais Mondial.

Only a few years after its delayed opening, the ambitious project started to lose support from
the Belgian government, which preferred to use the vast exhibition spaces for commercial
activities. In 1922 and 1924, Le Palais Mondial was temporarily closed to make space for
an international rubber fair.

P.258

P.259

1934: MUNDANEUM MOVED TO HOME OF PAUL OTLET

Si dans de telles conditions le Palais Mondial devait définitivement rester fermé, il
semble bien qu’il n’y aurait plus place dans notre Civilisation pour une institution
d’un caractère universel, inspirée de l’idéal indiqué en ces mots à son entrée : Par
la Liberté, l’Égalité et la Fraternité mondiales − dans la Foi, l’Espérance et la
[3]
Charité humaines − vers le Travail, le Progrès et la Paix de tous !
Cato, my wife, has been absolutely devoted to my work. Her savings and jewels
testify to it; her invaded house testify to it; her collaboration testifies to it; her wish
to see it finished after me testifies to it; her modest little fortune has served for the
[4]
constitution of my work and of my thought.

Walk under the Arc de Triomphe and exit
the Jubelfeestpark on your left. On
Avenue des Nerviens turn left into
Sint Geertruidestraat. Turn left onto
Kolonel Van Gelestraat and right onto
Rue Louis Hap. Turn left onto
Oudergemselaan and right onto Rue
Fetis 44.

In 1934, the ministry of public works decided to close the Mundaneum in order to make
room for an extension of the Royal Museum of Art and History. An outraged Otlet posted himself in
front of the closed entrance with his colleagues, but to no avail. The official address of the
Mundaneum was 'temporarily' transferred to the house at Rue Fétis 44 where he lived with
his second wife, Cato Van Nederhasselt.

P.260

P.261

Part of the archives were moved to Rue Fétis, but many boxes and most of the card-indexes
remained stored in the Cinquantenaire building. Paul Otlet continued a vigorous program of
lectures and meetings in other places, including at home.

1941: MUNDANEUM IN PARC LÉOPOLD

The upper galleries ... are one big pile of rubbish, one inspector noted in his report.
It is an impossible mess, and high time for this all to be cleared away. The Nazis
evidently struggled to make sense of the curious spectacle before them. The
institute and its goals cannot be clearly defined. It is some sort of ... 'museum for
the whole world,' displayed through the most embarrassing and cheap and
[5]
primitive methods.
Distributed in two large workrooms, in corridors, under stairs, and in attic rooms
and a glass-roofed dissecting theatre at the top of the building, this residue
gradually fell prey to the dust and damp darkness of the building in its lower
regions, and to weather and pigeons admitted through broken panes of glass in the
roof in the upper rooms. On the ground floor of the building was a dimly lit, small,

steeply-raked lecture theatre. On either side of its dais loomed busts of the
[6]
founders.
Derrière les vitres sales, j’aperçus un amoncellement de livres, de liasses de papiers
contenus par des ficelles, des dossiers dressés sur des étagères de fortune. Des
feuilles volantes échappées des cartons s’amoncelaient dans les angles de l’immense
pièce, du papier pelure froissé se mêlait au gravat et à la poussière. Des récipients
de fortune avaient été placés entre les caisses et servaient à récolter l’eau de pluie.
Un pigeon avait réussi à pénétrer à l’intérieur et se cognait inlassablement contre
[7]
l’immense baie vitrée qui fermait le bâtiment.
Annually in this room in the years after Otlet's death until the late 1960's, the
busts garlanded with floral wreaths for the occasion, Otlet and La Fontaine's
colleagues and disciples, Les Amis du Palais Mondial, met in a ceremony of
remembrance. And it was Otlet, theorist and visionary, who held their
imaginations most in beneficial thrall as they continued to work after his death, just
as they had in those last days of his life, among the mouldering, discorded
collections of the Mundaneum, themselves gradually overtaken by age, their
[8]
numbers dwindling.

Exit the Fétisstraat onto Chaussee de
Wavre, turn right and follow into the
Vijverstraat. Turn right on Rue Gray,
cross Jourdan plein into Parc Leopold.
Right at the entrance is the building
of l’Institut d’Anatomie Raoul
Warocqué.

In 1941, the Nazi German occupiers of Belgium wanted to use the spaces in the Palais du
Cinquantenaire, which were still being used to store the collections of the Mundaneum. They
decided to move the archives to Parc Léopold, except for a mass of periodicals, which were
simply destroyed. A vast quantity of files related to international associations was assumed to
have propaganda value for the German war effort. This part of the archive was transferred
to Berlin and apparently re-appeared in the Stanford archives (?) many years later.
They must have been taken there by American soldiers after World War II.
Until the 1970's, the Mundaneum (or what was left of it) remained in the decaying building
in Parc Léopold. Georges Lorphèvre and André Colet continued to carry on the work of the
Mundaneum with the help of a few now elderly Amis du Palais Mondial, members of the
association with the same name that was founded in 1921. It is here that the Belgian
librarian André Canonne, the Australian scholar Warden Boyd Rayward and the Belgian
documentary-maker Françoise Levie came across the Mundaneum archives for the very first
time.

P.262

P.263

2009: OFFICES GOOGLE BELGIUM

A natural affinity exists between Google's modern project of making the world’s
information accessible and the Mundaneum project of two early 20th century
Belgians. Otlet and La Fontaine imagined organizing all the world's information on paper cards. While their dream was discarded, the Internet brought it back to
reality and it's little wonder that many now describe the Mundaneum as the paper
Google. Together, we are showing the way to marry our paper past with our
[9]
digital future.

Exit the park onto Steenweg op
Etterbeek and walk left to number
176-180.

In 2009, Google Belgium opened its offices at the Chaussée d'Etterbeek 180. It is only a
short walk away from the last location where Paul Otlet was able to work on the
Mundaneum project.
Celebrating the discovery of its "European roots", the company has insisted on the
connection between the project of Paul Otlet and its own mission to organize the world's
information and make it universally accessible and useful. To celebrate the desired
connection to the Forefather of documentation, the building is said to have a Mundaneum
meeting room. In the lobby, you can find a vitrine with one of the drawers filled with UDC index cards, on loan from the Mundaneum archive center in Mons.

1944: GRAVE OF PAUL OTLET

When I am no more, my documentary instrument (my papers) should be kept
together, and, in order that their links should become more apparent, should be
sorted, fixed in successive order by a consecutive numbering of all the cards (like
[10]
the pages of a book).
Je le répète, mes papiers forment un tout. Chaque partie s’y rattache pour
constituer une oeuvre unique. Mes archives sont un "Mundus Mundaneum", un

P.264

P.265

outil conçu pour la connaissance du monde. Conservez-les; faites pour elles ce que
[11]
moi j’aurais fait. Ne les détruisez pas !

O P T I O N A L : Continue on Chaussée
d'Etterbeek toward Belliardstraat.
Turn left until you reach Rue de
Trèves. Turn right onto Luxemburgplein
and take bus 95 direction Wiener.

Paul Otlet dies in 1944, at the age of 76. His grave at the cemetery of Ixelles is
decorated with a globe and the inscription "Il ne fut rien sinon Mundanéen" (He was nothing
if not Mundanéen).
Exit the cemetery and walk toward
Avenue de la Couronne. At the
roundabout, turn left onto
Boondaalsesteenweg. Turn left onto
Boulevard Géneral Jacques and take
tram 25 direction Rogier.

Halfway through your tram journey you pass Square Vergote (Stop: Georges Henri), where Henri
La Fontaine and Mathilde Lhoest used to live. Statesman and Nobel Prize winner Henri
La Fontaine worked closely with Otlet and supported his projects throughout his life.
Get off at the stop Coteaux and follow
Rogierstraat until number 67.

1981: STORAGE AT AVENUE ROGIER 67

C'est à ce moment que le conseil d'administration, pour sauver les activités
(expositions, prêts gratuits, visites, congrès, exposés, etc.) vendit quelques pièces. Il
n'y a donc pas eu de vol de documents, contrairement à ce que certains affirment,
[12]
garantit de Louvroy.
In fact, not one of the thousands of objects contained in the hundred galleries of the
Cinquantenaire has survived into the present, not a single maquette, not a single
telegraph machine, not a single flag, though there are many photographs of the
[13]
exhibition rooms.
Mais je me souviens avoir vu à Bruxelles des meubles d'Otlet dans des caves
inondées. On dit aussi que des pans entiers de collections ont fait le bonheur des
amateurs sur les brocantes. Sans compter que le papier se conserve mal et que des
[14]
dépôts mal surveillés ont pollué des documents aujourd'hui irrécupérables.

This part of the walk takes about 45"
and will take you from the Ixelles
neighbourhood through Sint-Joost to
Schaerbeek; from high to low Brussels.

Continue on Steenweg op Etterbeek,
cross Rue Belliard and continue onto
Jean Reyplein. Take a left onto
Chaussée d'Etterbeek. If you prefer,
you can take a train at Bruxelles

P.266

P.267

Schumann Station to North Station, or
continue following Etterbeekse
steenweg onto Square Marie-Louise.
Continue straight onto
Gutenbergsquare, Rue Bonneels which
becomes Braemtstraat at some point.
Cross Chausséee de Louvain and turn
left onto Oogststraat. Continue onto
Place Houwaert and Dwarsstraat.
Continue onto Chaussée de Haecht and
follow onto Kruidtuinstraat. Take a
slight right onto Rue Verte, turn left
onto Kwatrechtstraat and under the
North Station railroad tracks. Turn
right onto Rue du Progrès.
Rogierstraat is the first street on
your left.

In 1972, we find Les Amis du Mundaneum back at Chaussée de Louvain 969.
Apparently, the City of Brussels has moved the Mundaneum out of Parc Léopold into a
parking garage, 'a building rented by the ministry of Finances', 'in the direction of the Saint-Josse-ten-Noode station'.[15] Ten years later, the collection is moved to the back-house of a
building at Avenue Rogier 67.
As a young librarian, André Canonne visits the collection at this address until he is in a
position to move the collection elsewhere.

1985: ESPACE MUNDANEUM UNDER PLACE ROGIER

On peut donc croire sauvées les collections du "Mundaneum" et a bon droit
espérer la fin de leur interminable errance. Au moment ou nous écrivons ces lignes,
des travaux d’aménagement d'un "Espace Mundaneum" sont en voie
[16]
d’achèvement au cour de Bruxelles.
L'acte fut signé par le ministre Philippe Monfils, président de l'exécutif. Son
prédécesseur, Philippe Moureaux, n'était pas du même avis. Il avait même acheté
pour 8 millions un immeuble de la rue Saint-Josse pour y installer le musée. Il
fallait en effet sauver les collections, enfouies dans l'arrière-cour d'une maison de
repos de l'avenue Rogier! (...) L'étage moins deux, propriété de la commune de
Saint-Josse, fut cédé par un bail emphytéotique de 30 ans à la Communauté, avec
un loyer de 800.000 F par mois. (...) Mais le Mundaneum est aussi en passe de
devenir une mystérieuse affaire en forme de pyramide. A l'étage moins un, la
commune de Saint-Josse et la société française «Les Pyramides» négocient la
construction d'un Centre de congrès (il remplace celui d'un piano-bar luxueux)
d'ampleur. Le montant de l'investissement est évalué à 150 millions (...) Et puis,
ce musée fantôme n'est pas fermé pour tout le monde. Il ouvre ses portes! Pas pour
y accueillir des visiteurs. On organise des soirées dansantes, des banquets dans la
grande salle. Deux partenaires (dont un traiteur) ont signé des contrats avec
l'ASBL Centre de lecture publique de la communauté française. Contrats
[17]
reconfirmés il y a quinze jours et courant pendant 3 ans encore!
Mais curieusement, les collections sont toujours avenue Rogier, malgré l'achat
d'un local rue Saint-Josse par la Communauté française, et malgré le transfert
officiel (jamais réalisé) au «musée» du niveau - 2 de la place Rogier. Les seules
choses qu'il contient sont les caisses de livres rétrocédées par la Bibliothèque
[18]
Royale qui ne savait qu'en faire.

P.268

P.269

Follow Avenue Rogier. Turn left onto
Brabantstraat until you cross under
the railroad tracks. Place Rogier is
on your right hand, marked by a large
overhead construction of a tilted
white dish.

In 1985, André Canonne convinced Les Amis du Palais Mondial to transfer the
responsibility for the collection and the mission of the association to the Centre de lecture
publique de la Communauté française, based in Liège, the organisation of which he had
become the director. It was agreed that the Mundaneum should stay in Brussels; the
documents mention a future location at the Rue Saint-Josse 49, a building apparently
acquired for that purpose by the Communauté française.
Five years later, plans have changed. In 1990, the archives are being moved from their
temporary storage in Avenue Rogier and the Royal Library of Belgium to a new location at
Place Rogier -2. Under the guidance of André Canonne, a "Mundaneum space" will be
opened in the center of Brussels, right above the Metro station Rogier. Unfortunately,
Canonne dies just weeks after the move has begun, and the Brussels Espace Mundaneum
never opens its doors.
In the following three years, the collection remains in the same location but apparently
without much supervision. Journalists report that doors were left unlocked and that Metro
passengers could help themselves to handfuls of documents. The collection has in the meantime
attracted the attention of Elio Di Rupo, at that time minister of education of the
Communauté française. It marks the beginning of the end of the Mundaneum as an itinerant
archive in Brussels.

You can end the tour here, or add two optional destinations:

1934: IMPRIMERIE VAN KEERBERGHEN IN RUE PIERS

O P T I O N A L :

(from Place Rogier, 20") Follow
Kruidtuinlaan onto Boulevard Baudouin
and onto Antwerpselaan, down in the
direction of the canal. At the
Sainctelette bridge, cross the canal
and take a slight left into Rue
Adolphe Lavallée. Turn left onto
Piersstraat. Alternatively, at Rogier
you can take a Metro to Ribaucourt
station, and walk from there.

At number 101 we find Imprimerie Van Keerberghen, the printer that produced and
distributed Le Traité de Documentation. In 1934, Otlet did not have enough money to pay
for the full print run of the book, and the edition therefore remained with Van Keerberghen,
who would distribute the copies himself through mail order. The plaque on the door dates
from the period in which the Traité was printed. So far we have not been able to confirm whether
this family business is still in operation.

P.270

P.271

RUE OTLET

O P T I O N A L :

(from Rue Piers, ca. 30") Follow Rue
Piers and turn left into
Merchtemsesteenweg and follow until
Chaussée de Gand, turn left. Turn
right onto Ransfortstraat and cross
Chaussée de Ninove. Turn left to
follow the canal onto Mariemontkaai
and left at Rue de Manchester to cross
the water. Continue onto
Liverpoolstraat, cross Chaussee de
Mons and continue onto Dokter De

Meersmanstraat until you meet Rue
Otlet.

(from Place Rogier, ca. 30") Follow
Boulevard du Jardin Botanique and turn
left onto Adolphe Maxlaan and Place De
Brouckère. Continue onto Anspachlaan,
turn right onto Rue du Marché aux
Poulets. Turn left onto
Visverkopersstraat and continue onto
Rue Van Artevelde. Continue straight
onto Anderlechtschesteenweg, continue
onto Chaussée de Mons. Turn left onto
Otletstraat. Alternatively you can
take tram 51 or 81 to Porte
D'Anderlecht.

Although it seems that this dreary street is named to honor Paul Otlet, it already
mysteriously appears on a map dated 1894 when Otlet was not even 26 years old [19] and
again on a map from 1910, when the Mundaneum had not yet opened its doors.[20]

P.272

P.273

OUTSIDE BRUSSELS

1998: THE MUNDANEUM RESURRECTED

Bernard Anselme, le nouveau ministre-président de la Communauté française,
négocia le transfert à Mons, au grand dam de politiques bruxellois furieux de voir
cette prestigieuse collection quitter la capitale. (...) Cornaqué par Charles Picqué et
Elio Di Rupo, le transfert à Mons n'a pas mis fin aux ennuis du Mundaneum.
On créa en Hainaut une nouvelle ASBL chargée d'assurer le relais. C'était sans
compter avec l'ASBL Célès, héritage indépendant du CLPCF, évoqué plus haut,
que la Communauté avait fini par dissoudre. Cette association s'est toujours
considérée comme propriétaire des collections, au point de s'opposer régulièrement
à leur exploitation publique. Les faits lui ont donné raison: au début du mois de

mai, le Célès a obtenu du ministère de la Culture que cinquante millions lui soient
[21]
versés en contrepartie du droit de propriété.
The reestablishment of the Mundaneum in Mons as a museum and archive is in
my view a major event in the intellectual life of Belgium. Its opening attracted
[22]
considerable international interest at the time.
Le long des murs, 260 meubles-fichiers témoignaient de la démesure du projet.
Certains tiroirs, ouverts, étaient éclairés de l’intérieur, ce qui leur donnait une
impression de relief, de 3D. Un immense globe terrestre, tournant lentement sur
lui-même, occupait le centre de l’espace. Sous une voie lactée peinte à même le
plafond, les voix de Paul Otlet et d’Henri La Fontaine, interprétés par des
comédiens, s’élevaient au fur et à mesure que l’on s’approchait de tel ou tel
[23]
document.
L’Otletaneum, c’est à dire les archives et papiers personnels ayant appartenu à
Paul Otlet, représentait un fonds important, peu connu, mal répertorié, que l’on
pouvait cependant quantifier à la place qu’il occupait sur les étagères des réserves
situées à l’arrière du musée. Il y avait là 100 à 150 mètres de rayonnages, dont
une partie infime avait fait l’objet d’un classement. Le reste, c’est à dire une
soixantaine de boîtes à bananes‚ était inexploré. Sans compter l’entrepôt de
Cuesmes où le travail de recensement pouvait être estimé, me disait-il, à une
[24]
centaine d’années...
Après des multiples déménagements, un travail laborieux de sauvegarde entamé
par les successeurs, ce patrimoine unique ne finit pas de révéler ses richesses et ses
surprises. Au-delà de cette démarche originale entamée dans un esprit
philanthropique, le centre d’archives propose des collections documentaires à valeur
[25]
historique, ainsi que des archives spécialisées.

In 1993, after some arm-wrestling between different local factions of the Parti Socialiste, the
collections of the Mundaneum are moved from Place Rogier to the former department store
L'Indépendance in Mons, 40 kilometres from Brussels and home to Elio Di Rupo. Benoît
Peeters and François Schuiten design a theatrical scenography that includes a gigantic globe
and walls decorated with what is left of the wooden card catalogs. The center opens in
1998 under the direction of librarian Jean-François Füeg.
In 2015, Mons is European Capital of Culture, with the slogan "Mons, where culture meets
technology". The Mundaneum archive center plays a central role in the media campaigns
and activities leading up to the festive year. In that same period, the center undergoes a large-scale renovation that finally brings the archive facilities up to date. A new reading room is
named after André Canonne, the conference room is called Utopia. The mise-en-scène of
Otlet's messy office is removed, but otherwise the scenography remains largely unchanged.

P.274

P.275

2007: CRYSTAL COMPUTING

Jean-Paul Deplus, échevin (adjoint) à la culture de la ville, affiche ses ambitions.
« Ce lieu est une illustration saisissante de ce que des utopistes visionnaires ont
apporté à la civilisation. Ils ont inventé Google avant la lettre. Non seulement ils
l’ont fait avec les seuls outils dont ils disposaient, c’est-à- dire de l’encre et du

papier, mais leur imagination était si féconde que l’on a retrouvé les dessins et
croquis de ce qui préfigure Internet un siècle plus tard. » Et Jean-Pol Baras
d’ajouter «Et qui vient de s’installer à Mons ? Un “data center” de Google ...
[26]
Drôle de hasard, non ? »
Dans une ambiance où tous les partenaires du «projet Saint-Ghislain» de Google
savouraient en silence la confirmation du jour, les anecdotes sur la discrétion
imposée durant 18 mois n’ont pas manqué. Outre l’utilisation d’un nom de code,
Crystal Computing, qui a valu un jour à Elio Di Rupo d’être interrogé sur
l’éventuelle arrivée d’une cristallerie en Wallonie («J’ai fait diversion comme j’ai
pu !», se souvient-il), un accord de confidentialité liait Google, l’Awex et l’Idea,
notamment. «A plusieurs reprises, on a eu chaud, parce qu’il était prévu qu’au
[27]
moindre couac sur ce point, Google arrêtait tout»
Beaucoup de show, peu d’emplois: Pour son data center belge, le géant des
moteurs de recherche a décroché l’un des plus beaux terrains industriels de
Wallonie. Résultat : à peine 40 emplois directs et pas un euro d’impôts. Reste que
la Région ne voit pas les choses sous cet angle. En janvier, a appris Le Vif/
L’Express, le ministre de l’Economie Jean-Claude Marcourt (PS) a notifié à
Google le refus d’une aide à l’expansion économique de 10 millions d’euros.
Motif : cette aide était conditionnée à la création de 110 emplois directs, loin d’être
atteints. Est-ce la raison pour laquelle aucun ministre wallon n’était présent le 10
avril aux côtés d’Elio Di Rupo ? Au cabinet Marcourt, on assure que les relations
avec l’entreprise américaine sont au beau fixe : « C’est le ministre qui a permis ce
nouvel investissement de Google, en négociant avec son fournisseur d’électricité
[28]
(NDLR : Electrabel) une réduction de son énorme facture.

In 2005, Elio Di Rupo succeeds in bringing the company "Crystal Computing" to the region,
a code name for Google Inc., which plans to build a data center at Saint-Ghislain, a prime
industrial site close to Mons. Promising 'a thousand jobs', the presence of Google becomes a
way for Di Rupo to demonstrate that the Marshall Plan for Wallonia, an attempt to "step up
the efforts taken to put Wallonia back on the track to prosperity", is attaining its goals. The
first data center opens in 2007 and is followed by a second one in 2015. The
direct impact on employment in the region is estimated to be somewhere between 110[29] and
120 jobs.[30]

P.276

P.277

Last
Revision:
2·08·2016

1. Paul Otlet (1868-1944) Fondateur du mouvement bibliologique international. Par Jacques Hellemans (Bibliothèque de
l’Université libre de Bruxelles, Premier Attaché)
2. Jacques Hellemans. Paul Otlet (1868-1944) Fondateur du mouvement bibliologique international
3. Paul Otlet. Document II in: Traité de documentation (1934)
4. Paul Otlet. Diary (1938), Quoted in: W. Boyd Rayward. The Universe of Information : The Work of Paul Otlet for
Documentation and International Organisation (1975)
5. Alex Wright. Cataloging the World: Paul Otlet and the Birth of the Information Age (2014)
6. Warden Boyd Rayward. Mundaneum: Archives of Knowledge (2010)
7. Françoise Levie. L'homme qui voulait classer le monde: Paul Otlet et le Mundaneum (2010)
8. Warden Boyd Rayward. Mundaneum: Archives of Knowledge (2010)
9. William Echikson. A flower of computer history blooms in Belgium (2013) http://googlepolicyeurope.blogspot.be/2013/02/
a-flower-of-computer-history-blooms-in.html
10. Testament Paul Otlet, 1942.01.18*, No. 67, Otletaneum. Quoted in: W. Boyd Rayward. The Universe of Information :
The Work of Paul Otlet for Documentation and International Organisation (1975)
11. Paul Otlet cited in Françoise Levie, Filmer Paul Otlet, Cahiers de la documentation – Bladen voor documentatie – 2012/2
12. Le Soir, 27 juillet 1991
13. Warden Boyd Rayward. Mundaneum: Archives of Knowledge (2010)
14. Le Soir, 17 juin 1998
15. http://www.reflexcity.net/bruxelles/photo/72ca206b2bf2e1ea73dae1c7380f57e3
16. André Canonne. Introduction to the 1989 facsimile edition of Le Traité de documentation File:TDD ed1989 preface.pdf
17. Le Soir, 24 juillet 1991
18. Le Soir, 27 juillet 1991
19. http://www.reflexcity.net/bruxelles/plans/4-cram-fin-xixe.html
20. http://gallica.bnf.fr/ark:/12148/btv1b84598749/f1.item.zoom
21. Le Soir, 17 juin 1998
22. Warden Boyd Rayward. Mundaneum: Archives of Knowledge (2010)
23. Françoise Levie, Filmer Paul Otlet, Cahiers de la documentation – Bladen voor documentatie – 2012/2
24. Françoise Levie, L'Homme qui voulait classer le monde: Paul Otlet et le Mundaneum, Impressions Nouvelles, Bruxelles,
2006
25. Stéphanie Manfroid, Les réalités d’une aventure documentaire, Cahiers de la documentation – Bladen voor documentatie –
2012/2
26. Jean-Michel Djian, Le Mundaneum, Google de papier, Le Monde Magazine, 19 december 2009
27. Libre Belgique (27 april 2007)
28. Le Vif, April 2013
29. Le Vif, April 2013

30. http://www.rtbf.be/info/regions/detail_google-va-investir-300-millions-a-saint-ghislain?id=7968392

P.278

P.279

Cross-readings

Les
Pyramides
"A pyramid is a structure whose outer surfaces are triangular and converge to
a single point at the top"
[1]

A slew of pyramids can be found in all of Paul Otlet's drawers. Knowledge
schemes and diagrams, drawings and drafts, designs, prototypes and
architectural plans (including works by Le Corbusier and Maurice Heymans)
employ the pyramid to provide structure, hierarchy, a precise path and, finally, access to the world's synthesized knowledge. At specific temporal cross-sections, these plans were criticized for their proximity to occultism or
monumentalism. Today their rich esoteric symbolism is still readily apparent
and gives reason to search for possible spiritual or mystical underpinnings of
the Mundaneum.
Paul Otlet (1926):
“Une immense pyramide est à construire. Au sommet y travaillent Penseurs,
Sociologues et grands Artistes. Le sommet doit rejoindre la base où s’agitent les
masses, mais la base aussi doit être disposée de manière qu’elle puisse rejoindre le
[2]
sommet.”

P.280

P.281

[3]

[4]

Paul Otlet, Species Mundaneum. Mundaneum, Mons. Personal papers of Paul Otlet (MDN). Fonds Encyclopaedia Universalis Mundaneum (EUM), document No. 8506.

Inscription: "Il ne fut rien sinon Mundanéen"

La Pyramide des Bibliographies. In: Paul Otlet, Traité de documentation: le livre sur le livre, théorie et pratique (Bruxelles: Editiones Mundaneum, 1934), 290.

Qui scit ubi scientia habenti est proximus. Who knows where science is, is about to have it. The librarian is helped by collaborators: Bibliotecaire-adjoints, rédacteurs, copistes, gens de service.

Tomb at the grave of Paul Otlet
[5]

[6]

[7]

Design for the Mundaneum, Section and facades by Le Corbusier

Sketch for La Mondotheque. Paul Otlet, 1935?

An axonometric view of the Mundaneum gives the effect of an aerial photograph of an archeological site — Egyptian, Babylonian, Assyrian, ancient American (Mayan and Aztec) or Peruvian. These historical reminiscences are striking. Remember the important building works of the Mayas, who were the zenith of ancient American civilization. These well-known ruins (Uxmal, Chichen-Itza, Palenque on the Yucatan peninsula, and Copan in Guatemala) represent a “metaphysical architecture” of special cities of religious cults and burial grounds, cities of rulers and priests; pyramids, cathedrals of the sun, moon and stars; holy places of individual gods; graduating pyramids and terraced palaces with architectural objects conceived in basic

Plan of the Mundaneum by M.C. Heymans

Perspective of the Mundaneum by M.C. Heymans

[8]

[9]

[10]

Paul Otlet, Cellula
Mundaneum (1936).
Mundaneum, Mons.
Personal papers of Paul
Otlet (MDN). Fonds
Affiches (AFF).

As soon as all forms of life are categorized, classified and determined, individuals will become numeric "dividuals" in sets, subsets or classes.

Sketch for Mundaneum World City. Le Corbusier, 1929

[12]

Atlas Bruxelles –
Urbaneum - Belganeum Mundaneum. Page de
garde du chapitre 991 de
l'Atlas de Bruxelles.

[13]

The universe (which
others call the Library) is
composed of an indefinite
and perhaps infinite
number of triangular
galleries, with vast air
shafts between, surrounded
by very low railings. From
any of the triangles one
can see, interminably, the
upper and lower floors.
The distribution of the
galleries is invariable.

P.282

[11]

The ship wherein Theseus
and the youth of Athens
returned had thirty oars,
and was preserved by the
Athenians down even to
the time of Demetrius
Phalereus, for they took
away the old planks as
they decayed, putting in
new and stronger timber in
their place, insomuch that
this ship became a
standing example among
the philosophers, for the
logical question of things
that grow; one side holding
that the ship remained the
same, and the other
contending that it was not
the same.

P.283

[14]

[15]

Universal Decimal
Classification: hierarchy

World City by Le
Corbusier & Jeanneret

Paul Otlet personal
papers. Picture taken
during a Mondotheque
visit of the Mundaneum
archives, 11 September
2015

The face of the earth would be much altered if brick architecture were ousted everywhere by glass architecture. It would be as if the earth were adorned with sparkling jewels and enamels. Such glory is unimaginable. We should then have a paradise on earth, and no need to watch in longing expectation for the paradise in heaven.

Alimentation. — La base de notre alimentation repose en principe sur un trépied. 1° Protides (viandes, azotes). 2° Glycides (légumineux, hydrates de carbone). 3° Lipides (graisses). Mais il faut encore pour présider au cycle de la vie et en assurer la régularité, des vitamines : c’est à elles qu’est due la croissance des jeunes, l’équilibre nutritif des adultes et une certaine jeunesse chez les vieillards.
[16]

[17]

[18]

[19]

Traité de documentation - La pyramide des bibliographies

Inverted pyramid and floor plan by Stanislas Jasinski

Architectural vision of the Mundaneum by M.C. Heymans

Section by Stanislas Jasinski

Le Corbusier, Musée
Mondial (1929), FLC,
doc nr. 24510

Le réseau Mundaneum. From Paul Otlet, Encyclopaedia Universalis Mundaneum

[20]

Paul Otlet, Mundaneum.
Documentatio Partes.
MDN, EUM, doc nr.
8506, scan nr.
Mundaneum_A400176

P.284


Metro Place Rogier in
2008

Paul Otlet, Atlas Monde
(1936). MDN, AFF,
scan nr.
Mundaneum_032;
Mundaneum_034;
Mundaneum_036;
Mundaneum_038;
Mundaneum_040;
Mundaneum_042;
Mundaneum_044;
Mundaneum_046;
Mundaneum_049 (sic!)

[21]

The “Sacrarium,” is something like a temple of ethics, philosophy, and religion. A great globe, modeled and colored, in a scale 1 = 1,000,000 with the planetarium inside, is situated in front of the museum building.

See Cross-readings, Rayward, Warden Boyd (who translated and adapted), Mundaneum: Archives of Knowledge, Urbana-Champaign, Ill.: Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign, 2010. Original: Charlotte Dubray et al., Mundaneum: Les Archives de la Connaissance, Bruxelles: Les Impressions Nouvelles, 2008. (p. 37)

Place Rogier, Brussels around 2005

Paul Otlet, Le Monde en son ensemble
(1936). Mundaneum, Mons. MDN,
AFF, scan nr.
MUND-00009061_2008_0001_MA

[22]

Place Rogier, Brussels
with sign "Pyramides"

P.285

[23]

Toute la Documentation. A late sketch from 1937 showing all the complexity of the pyramid of documentation. An evolutionary element works its way up, and in the conclusive level one can read a synthesis: "Homo Loquens, Homo Scribens, Societas Documentalis".

Logo of the Mundaneum

SOURCES
Last
Revision:
1·08·2016

1. https://en.wikipedia.org/wiki/Pyramid
2. Paul Otlet, L’Éducation et les Instituts du Palais Mondial (Mundaneum). Bruxelles: Union des Associations Internationales,
1926, p. 10. ("A great pyramid should be constructed. At the top are to be found Thinkers, Sociologists and great Artists. But
the top must be joined to the base where the masses are found, and the bases must have control of a path to the top.")
3. Wouter Van Acker. "Architectural Metaphors of Knowledge: The Mundaneum Designs of Maurice Heymans, Paul Otlet,
and Le Corbusier." Library Trends 61, no. 2 (2012): 371-396. http://muse.jhu.edu/
4. Photo: Roel de Groof http://www.zita.be/foto/roel-de-groof/allerlei/graf-paul-otlet/
5. Wouter Van Acker, 'Opening the Shrine of the Mundaneum The Positivist Spirit in the Architecture of Le Corbusier and his
Belgian “Idolators,”' in Proceedings of the Society of Architectural Historians, Australia and New Zealand: 30, Open, edited
by Alexandra Brown and Andrew Leach (Gold Coast,Qld: SAHANZ, 2013), vol. 2, p. 792.
6. Wouter Van Acker. "Architectural Metaphors of Knowledge: The Mundaneum Designs of Maurice Heymans, Paul Otlet,
and Le Corbusier." Library Trends 61, no. 2 (2012): 371-396.
7. Wouter Van Acker. "Architectural Metaphors of Knowledge: The Mundaneum Designs of Maurice Heymans, Paul Otlet,
and Le Corbusier." Library Trends 61, no. 2 (2012): 371-396.
8. Wouter Van Acker. "Architectural Metaphors of Knowledge: The Mundaneum Designs of Maurice Heymans, Paul Otlet,
and Le Corbusier." Library Trends 61, no. 2 (2012): 371-396. http://muse.jhu.edu/
9. Paul Otlet, Traité de documentation: le livre sur le livre, théorie et pratique (Bruxelles: Editiones Mundaneum, 1934), 420.
10. http://www.fondationlecorbusier.fr
11. http://www.numeriques.be
12. http://www.numeriques.be

13. Rayward, Warden Boyd, The Universe of Information: the Work of Paul Otlet for Documentation and international
Organization, FID Publication 520, Moscow, International Federation for Documentation by the All-Union Institute for
Scientific and Technical Information (Viniti), 1975. (p. 352)
14. The Man Who Wanted to Classify the World
15. Rayward, Warden Boyd (who translated and adapted), Mundaneum: Archives of Knowledge, Urbana-Champaign, Ill.:
Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign, 2010, p. 35. Original:
Charlotte Dubray et al., Mundaneum: Les Archives de la Connaissance, Bruxelles: Les Impressions Nouvelles, 2008.
16. Paul Otlet, Traité de documentation: le livre sur le livre, théorie et pratique (Bruxelles: Editiones Mundaneum, 1934).
17. Wouter Van Acker, 'Opening the Shrine of the Mundaneum The Positivist Spirit in the Architecture of Le Corbusier and his
Belgian “Idolators,”' in Proceedings of the Society of Architectural Historians, Australia and New Zealand: 30, Open, edited
by Alexandra Brown and Andrew Leach (Gold Coast,Qld: SAHANZ, 2013), vol. 2, p. 804.
18. Wouter Van Acker, 'Opening the Shrine of the Mundaneum The Positivist Spirit in the Architecture of Le Corbusier and his
Belgian “Idolators,”' in Proceedings of the Society of Architectural Historians, Australia and New Zealand: 30, Open, edited
by Alexandra Brown and Andrew Leach (Gold Coast,Qld: SAHANZ, 2013), vol. 2, p. 803.
19. Wouter Van Acker, 'Opening the Shrine of the Mundaneum The Positivist Spirit in the Architecture of Le Corbusier and his
Belgian “Idolators,”' in Proceedings of the Society of Architectural Historians, Australia and New Zealand: 30, Open, edited
by Alexandra Brown and Andrew Leach (Gold Coast,Qld: SAHANZ, 2013), vol. 2, p. 804.
20. From Van Acker, Wouter, “Internationalist Utopias of Visual Education. The Graphic and Scenographic Transformation of
the Universal Encyclopaedia in the Work of Paul Otlet, Patrick Geddes, and Otto Neurath,” in Perspectives on Science,
Vol.19, nr.1, 2011, p. 72. http://staging01.muse.jhu.edu/journals/perspectives_on_science/v019/19.1.van-acker.html
21. https://ideals.illinois.edu/bitstream/handle/2142/15431/Rayward_215_WEB.pdf?sequence=2
22. http://www.sonuma.com/archive/la-conservation-des-archives-du-mundaneum
23. Mundaneum Archives, Mons

P.286

P.287

Transclusionism
This page documents some of the contraptions at work in the Mondotheque
wiki. The name "transclusionism" refers to the term "transclusion" coined by
utopian systems humanist Ted Nelson and used in MediaWiki to refer to the inclusion of the same piece of text in different pages.
HOW TO TRANSCLUDE LABELLED SECTIONS BETWEEN
TEXTS:

To create transclusions between different texts, you need to select a section of text that will
form a connection between the pages, based on a common subject:
• Think of a category that is the common ground for the link. For example if two texts
refer to a similar issue or specific concept (eg. 'rawdata'), formulate it without
spaces or using underscores (eg. 'raw_data', not 'raw data' );
• Edit the two or more pages which you want to link, adding {{RT|rawdata}}<section begin=rawdata /> before the text section and <section end=rawdata /> at the end (take care of the closing '/>');
• All text sections in other wiki pages that are marked up through the same common ground will be transcluded in the margin of the text (a sketch of the markup follows below).
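As a minimal sketch (the page on which this markup sits and the wording inside the section are illustrative; the section tags follow the conventions of the Labeled Section Transclusion extension rather than being copied from the wiki itself), marking up a passage could look like this:

{{RT|rawdata}}<section begin=rawdata />
Every document can contain links of any type including virtual copies
("transclusions") to any other document in the system accessible to its owner.
<section end=rawdata />

Any other page that wraps a passage in the same rawdata label will then show this section transcluded in its margin, and vice versa.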
HOW IT SHOWS UP:

For example, this is how a transclusion from a labelled section of the Xanadu article appears:

From Xanadu:
Every document can contain links of
any type including virtual copies
("transclusions") to any other
document in the system accessible to

its owner.

HOW IT WORKS:

The <section begin=rawdata /> ... <section end=rawdata /> code is used by the 'Labeled Section Transclusion' extension, which looks for the tagged sections in a text, to transclude them into another text based on the assigned labels.
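Taken on its own (a hypothetical call, with the page title and label chosen for illustration), the parser function provided by the extension transcludes one labelled section from another page:

{{#lst: Xanadu | rawdata }}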
The {{RT|rawdata}} call, instead, creates the side links by transcluding the Template:RT page, substituting the word rawdata for {{{1}}} in its internal code. This is the commented content of Template:RT:
# Puts the transcluded sections in its own div:
<div>
# Searches semantically for all the pages in the
# requested category, puts them in an array:
{{#ask:
[[Category:{{{1}}}]]|format=array | name=results
}}
# Starts a loop, going from 0 to the amount of pages
# in the array:
{{#loop: looper
| 0
| {{#arraysize: results}}
# If the pagename of the current element of the array
# is the same as the page calling the loop, it will skip
# the page:
| {{#ifeq: {{FULLPAGENAME:
{{#arrayindex: results | {{#var:looper}} }}
}}
|
{{FULLPAGENAME}}
|
|
# Otherwise it searches through the current page in the
# loop, for all the occurrences of labeled sections:
{{#lst:
{{#arrayindex: results | {{#var:looper}} }}
| {{{1}}}
}}
# Adds a link to the current page in loop:
([[{{#arrayindex: results | {{#var:looper}} }}]])
# Adds some space after the page:

P.288

P.289

# End of pagename if statement:
}}
# End of loop:
}}
# Closes div:
</div>
# Adds the page to the label category:
[[category:{{{1}}}]]
NECESSAIRE

Currently, on top of MediaWiki and Semantic MediaWiki, the following extensions need to be installed for the contraption to work (a combined sketch follows the list):
• Labeled Section Transclusion to be able to select specific sections of the texts and make
connections between them;
• Parser Functions to be able to operate statements like {{#if: }} and {{#ifeq: }} in the wiki pseudo-language;
• Arrays to create lists of objects, for example as a result of semantic queries;
• Loops to loop between the arrays above;
• Variables as it's needed by some of the above.
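As an illustration of how these building blocks combine (a sketch only: the array name 'demo' and its values are invented, and nothing here is copied from the Mondotheque templates), the following snippet defines an array, then loops over it and prints each element on its own line:

{{#arraydefine: demo | Otlet, La Fontaine, Le Corbusier }}<!-- Arrays: define a list -->
{{#loop: i <!-- Loops + Variables: 'i' counts from 0 upwards -->
| 0
| {{#arraysize: demo }} <!-- number of iterations = length of the array -->
| {{#arrayindex: demo | {{#var: i}} }}<br/>
}}

Template:RT above uses the same functions, adding an {{#ifeq: ...}} test from Parser Functions to skip the page that called it.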
Last
Revision:
2·08·2016

Reading
list
Cross-readings. Not a bibliography.
PAUL OTLET
• Paul Otlet, L’afrique aux noirs, Bruxelles: Ferdinand Larcier,
1888.
• Paul Otlet, L’Éducation et les Instituts du Palais Mondial
(Mundaneum). Bruxelles: Union des Associations
Internationales, 1926.
• Paul Otlet, Cité mondiale. Geneva: World civic center:
Mundaneum. Bruxelles: Union des Associations
Internationales, 1929.
• Paul Otlet, Traité de documentation, Bruxelles, Mundaneum,
Palais Mondial, 1934.
• Paul Otlet, Monde: essai d'universalisme - Connaissance du
Monde, Sentiment du Monde, Action organisee et Plan du
Monde, Bruxelles: Editiones Mundeum 1935. See also:
http://www.laetusinpraesens.org/uia/docs/otlet_contents.php
• Paul Otlet, Plan belgique; essai d'un plan général, économique,
social, culturel. Plan d'urbanisation national. Liaison avec le
plan mondial. Conditions. Problèmes. Solutions. Réformes,
Bruxelles: Éditiones Mundaneum, 1935.

RE-READING OTLET

Or, reading the readers that explored and contextualized the work of Otlet in recent times.
• Jacques Gillen, Stéphanie Manfroid, and Raphaèle Cornille
(eds.), Paul Otlet, fondateur du Mundaneum (1868-1944).

P.290

P.291

Architecte du savoir, Artisan de paix, Mons: Éditions Les
Impressions Nouvelles, 2010.
• Françoise Levie, L’homme qui voulait classer le monde. Paul
Otlet et le Mundaneum, Bruxelles: Les Impressions Nouvelles,
2006.
• Warden Boyd Rayward, The Universe of Information: the
Work of Paul Otlet for Documentation and international
Organization, FID Publication 520, Moscow: International
Federation for Documentation by the All-Union Institute for
Scientific and Technical Information (Viniti), 1975.
• Warden Boyd Rayward, Universum informatsii: Zhizn' i deiatel'nost' Polia Otle, Trans. R.S. Giliarevesky, Moscow:
VINITI, 1976.
• Warden Boyd Rayward (ed.), International Organization and
Dissemination of Knowledge: Selected Essays of Paul Otlet,
Amsterdam: Elsevier, 1990.
• Warden Boyd Rayward, El Universo de la Documentacion: la
obra de Paul Otlet sobra documentacion y organizacion
internacional, Trans. Pilar Arnau Rived, Madrid: Mundanau,
2005.
• Warden Boyd Rayward, "Visions of Xanadu: Paul Otlet
(1868-1944) and Hypertext." Journal of the American
Society for Information Science (1986-1998) 45, no. 4 (05,
1994): 235-251.
• Warden Boyd Rayward (who translated and adapted),
Mundaneum: Archives of Knowledge, Urbana-Champaign, Ill.:
Graduate School of Library and Information Science,
University of Illinois at Urbana-Champaign, 2010. Original:
Charlotte Dubray et al., Mundaneum: Les Archives de la
Connaissance, Bruxelles: Les Impressions Nouvelles, 2008.
• Wouter Van Acker, “Internationalist Utopias of Visual Education. The Graphic and Scenographic Transformation of the Universal Encyclopaedia in the Work of Paul Otlet, Patrick Geddes, and Otto Neurath” in Perspectives on Science, Vol.19, nr.1, 2011, p. 32-80. http://staging01.muse.jhu.edu/journals/perspectives_on_science/v019/19.1.van-acker.html
• Wouter Van Acker, “Universalism as Utopia. A Historical
Study of the Schemes and Schemas of Paul Otlet
(1868-1944)”, Unpublished PhD Dissertation, University
Press, Zelzate, 2011.
• Theater Adhoc, The humor and tragedy of completeness,
2005.

FATHERS OF THE INTERNET

Constructing a posthumous pre-history of contemporary networking technologies.
• Christophe Lejeune, Ce que l’annuaire fait à Internet. Sociologie des épreuves documentaires, in Cahiers de la documentation – Bladen voor documentatie – 2006/3.
• Paul Dourish and Genevieve Bell, Divining a Digital Future,
Chicago: MIT Press 2011.
• John Johnston, The Allure of Machinic Life: Cybernetics,
Artificial Life, and the New AI, Chicago: MIT Press 2008.
• Charles van den Heuvel Building society, constructing
knowledge, weaving the web, in Boyd Rayward [ed.]
European Modernism and the Information Society, London:
Ashgate Publishers 2008, chapter 7 pp. 127-153.
• Tim Berners-Lee, James Hendler, Ora Lassila, The Semantic
Web, in Scientific American, vol. 284, no. 5,
pp. 34-43, 2001.
• Alex Wright, Cataloging the World: Paul Otlet and the Birth
of the Information Age, Oxford University Press, 2014.
• Popova, Maria, “The Birth of the Information Age: How Paul
Otlet’s Vision for Cataloging and Connecting Humanity
Shaped Our World”, Brain Pickings, 2014.
• Heuvel, Charles van den, “Building Society, Constructing
Knowledge, Weaving the Web”. in European Modernism and

P.292

P.293

the Information Society – Informing the Present,
Understanding the Past, Aldershot, 2008, pp. 127–153.

CLASSIFYING THE WORLD

The recurring tensions between the world and its systematic representation.
• ShinJoung Yeo, James R. Jacobs, Diversity matters?
Rethinking diversity in libraries, Radical Reference
Counterpoise 9 (2), Spring 2006, p. 5-8.
• Thomas Hapke, Wilhelm Ostwald's Combinatorics as a Link
between In-formation and Form, in Library Trends, Volume
61, Number 2, Fall 2012.
• Nancy Cartwright, Jordi Cat, Lola Fleck, Thomas E. Uebel,
Otto Neurath: Philosophy Between Science and Politics.
Cambridge University Press, 2008.
• Nathan Ensmenger, The Computer Boys Take Over:
Computers, Programmers, and the Politics of Technical
Expertise. MIT Press, 2010.
• Ronald E. Day, The Modern Invention of Information:
Discourse, History, and Power, Southern Illinois University
Press, 2001.
• Markus Krajewski, Peter Krapp (trans.), Paper Machines: About Cards & Catalogs, 1548-1929, The MIT Press.
• Eric de Grolier, A Study of general categories applicable to
classification and coding in documentation; Documentation and
terminology of science; 1962.
• Marlene Manoff, "Theories of the archive from across the
disciplines," in portal: Libraries and the Academy, Vol. 4, No.
1 (2004), pp. 9–25.
• Charles van den Heuvel, W. Boyd Rayward, Facing
Interfaces: Paul Otlet's Visualizations of Data Integration.

Journal of the American society for information science and
technology (2011).

DON'T BE EVIL

Standing on the hands of Internet giants.
• Rene Koenig, Miriam Rasch (eds), Society of the Query
Reader: Reflections on Web Search, Amsterdam: Institute of
Network Cultures, 2014.
• Matthew Fuller, Andrew Goffey, Evil Media. Cambridge,
Mass., United States: MIT Press, 2012.
• Steven Levy, In The Plex. Simon & Schuster, 2011.
• Dan Schiller, ShinJoung Yeo, Powered By Google: Widening
Access and Tightening Corporate Control in: Red Art: New
Utopias in Data Capitalism, Leonardo Electronic Almanac,
Volume 20 Issue 1 (2015).
• Invisible Committee, Fuck Off Google, 2014.
• Dave Eggers, The Circle. Knopf, 2014.
• Matteo Pasquinelli, Google’s PageRank Algorithm: A
Diagram of Cognitive Capitalism and the Rentier of the
Common Intellect. In: Konrad Becker, Felix Stalder
(eds), Deep Search, London: Transaction Publishers, 2009.
• Joris van Hoboken, Search Engine Freedom: On the
Implications of the Right to Freedom of Expression for the

P.294

P.295

Legal Governance of Web Search Engines. Kluwer Law
International, 2012.
• Wendy Hui Kyong Chun, Control and Freedom: Power and
Paranoia in the Age of Fiber Optics. The MIT Press, 2008.
• Siva Vaidhyanathan, The Googlization of Everything (And
Why We Should Worry). University of California Press.
2011.
• William Miller, Living With Google. In: Journal of Library
Administration Volume 47, Issue 1-2, 2008.
• Lawrence Page, Sergey Brin, The Anatomy of a Large-Scale
Hypertextual Web Search Engine. Computer Networks and
ISDN Systems, vol. 30 (1998), pp. 107-117.
• Ken Auletta, Googled: The end of the world as we know it.
Penguin Press, 2009.

EMBEDDED HIERARCHIES

How classification systems, and the dream of their universal application, actually operate.
• Paul Otlet, Traité de documentation, Bruxelles, Mundaneum,
Palais Mondial, 1934. (for alphabet hierarchy, see page 71)
• Paul Otlet, L’Afrique aux noirs, Bruxelles: Ferdinand Larcier,
1888.
• Judy Wajcman, Feminism Confronts Technology, University
Park, Pa: Pennsylvania State University Press, 1991.
• Judge, Anthony, “Union of International Associations – Virtual
Organization – Paul Otlet's 100-year Hypertext
Conundrum?”, 2001.
• Ducheyne, Steffen, “Paul Otlet's Theory of Knowledge and
Linguistic Objectivism”, in Knowledge Organization, no 32,
2005, pp. 110–116.

ARCHITECTURAL VISIONS

Writings on how Otlet's knowledge site was successively imagined and visualized on grand
architectural scales.
• Catherine Courtiau, "La cité internationale 1927-1931," in
Transnational Associations, 5/1987: 255-266.
• Giuliano Gresleri and Dario Matteoni. La Città Mondiale:
Andersen, Hébrard, Otlet, Le Corbusier. Venezia: Marsilio,
1982.
• Isabelle Rieusset-Lemarie, "P. Otlet's Mundaneum and the
International Perspective in the History of Documentation and
Information Science," in Journal of the American Society for
Information Science, 48.4 (April 1997): 301-309.
• Le Corbusier, Vers une Architecture, Paris: les éditions G.
Crès, 1923.
• "Otlet et Le Corbusier 1927-31", in Transnational
Associations, Issue No. 5 (INGO Development Projects:
Quantity or Quality), 1987.
• Wouter Van Acker. "Hubris or utopia? Megalomania and
imagination in the work of Paul Otlet," in Cahiers de la
documentation – Bladen voor documentatie – 2012/2,
58-66.
• Wouter Van Acker. "Architectural Metaphors of Knowledge:
The Mundaneum Designs of Maurice Heymans, Paul Otlet,
and Le Corbusier." Library Trends 61, no. 2 (2012):
371-396.
• Van Acker, Wouter, Somsen, Geert, “A Tale of Two World
Capitals – the Internationalisms of Pieter Eijkman and Paul
Otlet”, in Revue Belge de Philologie et d'Histoire/Belgisch
Tijdschrift voor Filologie en Geschiedenis, Vol. 90, nr.4,
2012.
• Wouter Van Acker, "Opening the Shrine of the Mundaneum:
The Positivist Spirit in the Architecture of Le Corbusier and his
Belgian 'Idolators'", in Proceedings of the Society of

P.296

P.297

Architectural Historians, Australia and New Zealand: 30,
Open, edited by Alexandra Brown and Andrew Leach (Gold
Coast, Qld: SAHANZ, 2013), vol. 2, 791-805.
• Anthony Vidler, “The Space of History: Modern Museums
from Patrick Geddes to Le Corbusier,” in The Architecture of
the Museum: Symbolic Structures, Urban Contexts, ed.
Michaela Giebelhausen (Manchester; New York: Manchester
University Press, 2003).
• Volker Welter, Biopolis: Patrick Geddes and the City of
Life, Cambridge, Mass: MIT Press, 2003.
• Alfred Willis, “The Exoteric and Esoteric Functions of Le
Corbusier’s Mundaneum,” Modulus/University of Virginia
School of Architecture Review 12, no. 21 (1980).

ZEITGEIST

Century-old sources alongside more recent ones on the parallel or entangled
movements around the time of the Mundaneum.
• Hendrik Christian Andersen and Ernest M. Hébrard.
Création d'un Centre mondial de communication. Paris, 1913.
• Julie Carlier, "Moving beyond Boundaries: An Entangled
History of Feminism in Belgium, 1890–1914," Ph.D.
dissertation, Universiteit Gent, 2010. (esp. 439-458.)
• Bambi Ceuppens, Congo made in Flanders?: koloniale
Vlaamse visies op "blank" en "zwart" in Belgisch Congo.
[Gent]: Academia Press, 2004.
• Conseil International des Femmes (International Council of
Women), Office Central de Documentation pour les Questions
Concernant la Femme. Rapport. Bruxelles : Office Central de
Documentation Féminine, 1909.
• Sandi E. Cooper, Patriotic pacifism waging war on war in
Europe, 1815-1914. New York: Oxford University Press,
1991.
• Sylvie Fayet-Scribe, "Women Professionals in Documentation
in France during the 1930s," Libraries & the Cultural Record

Vol. 44, No. 2, Women Pioneers in the Information Sciences
Part I, 1900-1950 (2009), pp. 201-219. (translated by
Michael Buckland)
• François Garas, Mes temples. Paris: Michalon, 1907.
• Madeleine Herren, Hintertüren zur Macht: Internationalismus
und modernisierungsorientierte Aussenpolitik in Belgien, der
Schweiz und den USA 1865-1914. München: Oldenbourg,
2000.
• Robert Hoozee and Mary Anne Stevens, Impressionism to
Symbolism: The Belgian Avant-Garde 1880-1900, London:
Royal Academy of Arts, 1994.
• Markus Krajewski, Die Brücke: A German contemporary of
the Institut International de Bibliographie. In: Cahiers de la
documentation / Bladen voor documentatie 66.2 (Juin,
Numéro Spécial 2012), 25–31.
• Daniel Laqua, "Transnational intellectual cooperation, the
League of Nations, and the problem of order," in Journal of
Global History (2011) 6, pp. 223–247.
• Lewis Pyenson and Christophe Verbruggen, "Ego and the
International: The Modernist Circle of George Sarton," Isis,
Vol. 100, No. 1 (March 2009), pp. 60-78.
• Elisée Reclus, Nouvelle géographie universelle; la terre et les
hommes, Paris, Hachette et cie., 1876-94.
• Edouard Schuré, Les grands initiés: esquisse de l'histoire
secrète des religions, 1889.
• Rayward, Warden Boyd (ed.), European Modernism and the
Information Society: Informing the Present, Understanding the
Past. Aldershot, Hants, England: Ashgate, 2008.
• Van Acker, Wouter, “Internationalist Utopias of Visual
Education. The Graphic and Scenographic Transformation of
the Universal Encyclopaedia in the Work of Paul Otlet,

P.298

P.299

Patrick Geddes, and Otto Neurath”, in Perspectives on
Science, Vol.19, nr.1, 2011, p. 32-80.
• Nader Vossoughian, "The Language of the World Museum:
Otto Neurath, Paul Otlet, Le Corbusier", Transnational
Associations 1-2 (January-June 2003), Brussels, pp 82-93.
• Alfred Willis, “The Exoteric and Esoteric Functions of Le
Corbusier’s Mundaneum,” Modulus/University of Virginia
School of Architecture Review 12, no. 21 (1980).
Last
Revision:
2·08·2016

Colophon/
Colofon
• Mondotheque editorial team/redactie team/équipe éditoriale: André Castro,
Sînziana Păltineanu, Dennis Pohl, Dick Reckard, Natacha
Roussel, Femke Snelting, Alexia de Visscher
• Copy-editing/tekstredactie/édition EN: Sophie Burm (Amateur Librarian, The
Smart City - City of Knowledge, X=Y, A Book of the Web), Liz Soltan (An
experimental transcript)
• Translations EN-FR/vertalingen EN-FR/traductions EN-FR: Eva Lena
Vermeersch (Amateur Librarian, A Pre-emptive History of the Google Cultural
Institute, The Smart City - City of Knowledge), Natacha Roussel (LES
UTOPISTES and their common logos, Introduction), Donatella
Portoghese
• Translations EN-NL/vertalingen EN-NL/traductions EN-NL: Femke
Snelting, Peter Westenberg
• Transcriptions/transcripties/transcriptions: Lola Durt, Femke Snelting,
Tom van den Wijngaert
• Design and development/ontwerp en ontwikkeling/graphisme et développement:
Alexia de Visscher, André Castro
• Fonts/lettertypes/polices: NotCourierSans, Cheltenham, Traité facsimile
• Tools/gereedschappen/outils: Semantic Mediawiki, etherpad,
Weasyprint, html5lib, mwclient, phantomjs, gnu make ...
• Source-files/bronbestanden/code source: https://gitlab.com/Mondotheque/
RadiatedBook + http://www.mondotheque.be
• Published by/een publicatie van/publié par: Constant (2016)
• Printed at/druk/imprimé par: Online-Druck.biz
• License/licentie/licence: Texts and images developed by Mondotheque are available
under a Free Art License 1.3 (C) Copyleft Attitude, 2007. You may copy,
distribute and modify them according to the terms of the Free Art License: http://
artlibre.org Texts and images by Paul Otlet and Henri Lafontaine are in the Public
Domain. Other materials copyright by the authors/Teksten en afbeeldingen
ontwikkeld door Mondotheque zijn beschikbaar onder een Free Art License 1.3 (C)
Copyleft Attitude, 2007. U kunt ze dus kopiëren, verspreiden en wijzigen volgens de
voorwaarden van de Free Art License: http://artlibre.org Teksten en beelden van
Paul Otlet en Henri Lafontaine zijn in het publieke domein. Andere materialen:
auteursrecht bij de auteurs/Les textes et images développées par Mondotheque sont

P.300

P.301

disponibles sous licence Art Libre 1.3 (C) Copyleft Attitude 2007. Vous pouvez
les copier, distribuer et modifier selon les termes de la Licence Art Libre: http://
artlibre.org Les textes et les images de Paul Otlet et Henri Lafontaine sont dans le
domaine public. Les autres matériaux sont assujettis aux droits d'auteur choisis par
les auteurs.
• ISBN: 9789081145954
Thank you/bedankt/merci: the contributors/de auteurs/les contributeurs, Yves Bernard,
Michel Cleempoel, Raphaèle Cornille, Jan Gerber, Marc d'Hoore, Églantine Lebacq,
Nicolas Malevé, Stéphanie Manfroid, Robert M. Ochshorn, An Mertens, Dries Moreels,
Sylvia Van Peteghem, Jara Rocha, Roel Roscam Abbing.
Mondotheque is supported by/wordt ondersteund door/est soutenu par: De Vlaamse
GemeenschapsCommissie, Akademie Schloss Solitude.
Last
Revision:
2·08·2016


tactics in Dockray, Pasquinelli, Smith & Waldorf 2010


Dockray, Pasquinelli, Smith & Waldorf
There is Nothing Less Passive than the Act of Fleeing
2010


# There is Nothing Less Passive than the Act of Fleeing

[The Public School](http://journalment.org/author/public-school)

What follows is a condensed and edited version of a text for a panel that was
presented at UCIRA’s _Future Tense: Alternative Arts and Economies in the
University_  conference held in San Diego, California on November 18, 2010.
The panel shared the same name as a 13-day itinerant seminar in Berlin
organized by Dockray, Waldorf, and Fiona Whitton earlier that year, in July.
The seminar began with an excerpt from Tiqqun’s _Introduction to Civil War_ ,
which was co-translated into English by Smith; and later read a chapter from
Pasquinelli’s _Animal Spirits: A Bestiary of the Commons_. Both authors have
also participated in meetings at The Public School in Los Angeles and Berlin.
Both the panel and the seminar developed out of longer conversations at The
Public School in Los Angeles, which began in late 2007 under Telic Arts
Exchange. The Public School is a school with no curriculum, where classes are
proposed and organized by the public.


## The Education Factory

The University, as I understand it, has been a threshold between youth and the
labor market. Or it has been a threshold between a general education and a
more specialized one. In its more progressive form, it’s been a zone of
transition into an expanding middle class. But does this form still exist? I’m
inclined to think just the opposite, that the University is becoming a means
for filtering people out of the middle class via student loan debt, which now
exceeds credit card debt. The point of these questions, for me, is simply: what is
the point of the University? What are we fighting for or defending?

The next question might be, do students work? The University is a crucial site
in the reproduction of class relations; we know that students are consumers;
we know the student is a future worker who will be compelled to work, and work
in a specific way, because she/he is crushed by debt contracted during her/his
tenure as a student; we know that students work while attending school, and
that for many students school and work eerily begin to resemble one another.
But asking whether students work is to ask something more specific: do
students produce value and, therefore surplus-value? If we can assume, for the
moment, that students are a factor in the “knowledge production” that takes
place in the University, is this production of knowledge also the production
of value? We confront, maybe, a paradox: all social activity has become
“productive”—captured, absorbed—at the very moment value becomes unmeasurable.

What does this have to do with students, and their work? The thesis of the
social factory was supplemented by the assumption that knowledge had become a
central mode in the production of value in post-Fordist environments. Wouldn’t
this mean that the university could become an increasingly important
flashpoint in social struggles, now that it has become not simply the site of
the reproduction of the capital relation, but involved in the immediate
production process, directly productive of value? Would we have to understand
students themselves as, if not knowledge producers, an irreplaceable moment or
function within that process? None of this remains clear. The question is not
only a sociological one, it is also a political one. The strategy of
reconceptualizing students as workers is rooted in the classical Marxist
identification of revolt with the point of production, that is, exploitation.
To declare all social activity to be productive is another way of saying that
social war can be triggered at any site within society, even among the
precarious, the unemployed, and students.

_Knowledge is tied to struggle. To truly know is to hate truly. This is why
the working class can know and possess everything of capital, as it is enemy
to itself as capital._
—Tronti, 1966

That form of "hate" mentioned by Tronti suggests an interesting form of
political passion and a new modus operandi. The relation between hate
and knowledge, suggested by Tronti, is the opposite of the cynical detachment
of the new social figure of the entrepreneur-artist: it is a joyful hate of
our condition. In order to educate ourselves we should hate our very own
environment and the social network in which we were educated: the university.
The positions the artist can take in their work and in the performance of
themselves (often no different) are manifold. There are histories for all of these
postures that can be referenced and adopted. They are all acceptable tactics
as long as we keep doing and churning out more. But where does this get us,
both within the confines of the arts and the larger social structure? We are
taught that the artist is always working, thinking, observing. We have learned
the tricks of communication, performance and adaptability. We can go anywhere,
react to anything, respond in a thoughtful and creative way to all problems.
And we do this because while there is opportunity, we should take it. “We
shouldn’t complain, others have it much worse.” But it doesn’t mean that we
shouldn't imagine something else. To begin thinking this way means a
refusal to deliver an event, to perform on demand. Maybe we need a kind of
inflexibility, of obstruction, of non-conductivity. After all, what exactly
are we producing and performing for? Can we try to think about these talents
of performance, of communication? If so, could this be the basis for an
intimacy, a friendship… another institution?


## Alternative pedagogical models

Let’s consider briefly the desire for “new pedagogical models” and “new forms
of knowledge production”. When articulated by the University, this simply
means new forms of instruction and new forms of research. Liberal faculty and
neoliberal politicians or administrators find themselves joined in this hunt
for future models and forms. On the one hand, faculty imagines that these new
techniques can provide space for continuing the good. On the other hand,
investors, politicians, and administrators look for any means to make the
University profitable: use unpaid labour, eliminate non-productive physical
spaces, and create new markets. Symptomatically, there is very little
resistance to this search for new forms and new models for the simple reason
that there is a consensus that the University should and will continue.

It’s also important to note that many of the so-called new forms and new
models being considered lie beyond the walls and payroll of the institution,
therefore both low-cost and low-risk. It is now a familiar story: the
institution attempts to renew itself by importing its own critique. The Public
School is not a new model and it’s not going to save the University. It is not
even a critique of the University any more or less than it is a critique of
the field of art or of capitalist society. It is not “the next university”
because it is a practice of leaving the University to the side. It would be a
mistake to think that this means isolation or total detachment.

Today, the forms of university governance cannot allow themselves to uproot
self-education. To the contrary, self-education constitutes a vital sap for
the survival of the institutional ruins, snatched up and rendered valuable in
the form of revenue. Governance is the trap, hasty and flexible, of the
common. Instead of countering us frontally, the enemy follows us. We must
immediately reject any weak interpretation of the theme of autonomous
institutions, according to which the institution is a self-governed structure
that lives between the folds of capitalism, without excessively bothering it.
The institutionalisation of self-education doesn’t mean being recognized as
one actor among many within the education market, but the capacity to organize
living knowledge’s autonomy and resistance.

One of the most important “new pedagogical models” that emerged over the past
year in the struggles around the implosion of the "public" university is the
occupations that took place in the Fall of 2009. Unlike other forms of action,
which tend to follow the timetable and cadence of the administration, to the
point of mirroring it, these actions had their own temporality, their own
initiative, their own internal logic. They were not at all concerned with
saving a university that was already in ruins, but rather with creating a
space at the heart of the University within which something else, some future,
could be risked, elaborated, prefigured. Everything had to be improvised, from
moment to moment, and in these improvisations new knowledges were developed
and shared. This improvisation was demanded by the aleatory quality of the
types of relations that emerged within these spaces, relations no longer
regulated by the social alibis that assign everyone her/his place. When
students occupy university buildings—here in California, in NYC, in Puerto
Rico, in Europe and the UK, everywhere—they do so not because they want to
save their universities. They do so because they know the university for what
it is, as something to be at once seized and abandoned. They know that they
can only rely on and learn from one another.


## The Common and The Public

What is really so disconcerting about this antinomy between the logic of the
common and the logic of the social or the public? For Jacotot, it means the
development of a communist politics that is neither reformist nor seditious.
It proposes the formation of common spaces at a distance from—if not outside
of—the public sphere and its communicative reason: “whoever forsakes the
workings of the social machine has the opportunity to make the electrical
energy of the emancipation machine.”

What does it mean to forsake the social machine? That is the major political
question facing us today. Such a forsaking would require that our political
energies organize themselves around spaces of experimentation at a distance
not only from the university and what is likely its slow-motion, or sudden,
collapse, but also from an entire imaginary inherited from the workers
movement: the task of a future social emancipation and the vectors and forms of
struggle such a task implies. Perhaps what is required is not to put off
equality for the future, but to presuppose the common, to affirm the commons as
a fact, a given, which must nevertheless be verified, created, not by a social
body, not by a collective force, but by a power of the common, now.

School is not University. Neither is it Academy or College or even Institute.
We are all familiar with the common meaning of the word: it is a place for
learning. In another sense, it also refers to organized education in general,
which is made most clear by the decision to leave, to “drop out of school”.
Alongside these two stable, almost architectural definitions, the word
gestures to composition and movement—the school of bodies, moving
independently, together; the school only exists as long as that collective
movement does. The school takes shape in this oscillation between form and
formlessness, not through the act of constructing a wall but by the process of
realizing its boundary through practice.

Perhaps this is a way to think of how to develop what Felix Guattari called
“the associative sector” in 1982: “everything that isn’t the state, or private
capital, or even cooperatives". At first gloss, the associative sector is
only a name for the remainder, the already outside; but, in the language of a
school, it is a constellation of relationships, affinities, new
subjectivities, and movements, flickering into existence through life and use.
An "engaged withdrawal" that simultaneously creates an exit and institutes in
the act of passing through. Which itself might bring us back to school, to the
Greek etymology of school, skhole, “a holding back”, a “keeping clear” of
space for reflective distance. On the one hand, perhaps this reflective space
simply allows theoretical knowledge to shape or affect performative action;
but on the other hand, the production of this “clearing” is not given,
certainly not now and certainly not by the institutions that claim to give it.
Reflective space is not the precondition for performative action. On the
contrary: performative action is the precondition for reflective space, or,
more appropriately, space and action must be coproduced.

Is the University even worth “saving”? We are right to respond with
indignation, or better, with an array of tactics—some procedural, some more
“direct”—against these incursions, which always seem to authorize themselves
by appeals to economic austerity, budget shortfalls, and tightened belts.
Perhaps what is being destroyed in this process is the very notion of the
public sphere itself. It is easy to succumb to the illusion
that the only possible result of this destruction of the figure of the public
is privatization. But what if the figure of the public was to be set off
against not only the private and property relations, but against a figure of
the “common” as well? What if, in other words, the notion of the public has
always been an unstable, mediating term between privatization and
communization, and what if the withering of this mediation left these two
processes openly at odds with each other? Perhaps, then, it is not simply a
question of saving a university and, more broadly, a public space that is
already withering away; maybe our energies and our intelligence, our
collective or common intellectual forces, should be devoted to organizing and
articulating just this sort of counter-transition, at a distance from the
public and the private.


## Authorship and new forms of knowledge

For decades we have spoken about the “death of the author”. The most sustained
critiques of authorship have been made from the spheres of art and education,
but not coincidentally, these spheres have the most invested in the notion.
Credit and accreditation are the mechanisms for attaching symbolic capital to
individuals via degrees and other lines on CVs. The curriculum vitæ is an
inverted credit report, evidence of underpaid work, kept orderly with an
expectation of some future return.

All of this work, this self-documentation, this fidelity between ourselves and
our papers, is for what, for whom? And what is the consequence of a world
where every person is armed with their vitæ, other than “the war of all
against all?” It’s that sensation that there are no teams but everyone has got
their own jersey.

The idea behind the project The Public School is to teach each other in a very
horizontal way. No curriculum, no hierarchy. But is The Public School able to
produce new knowledge and new content by itself? Can The Public School
become a sort of autonomous collective author? Or, is The Public School just
about exchanges and social networking?

In the recent history of university struggles, some collectives started to
refresh the idea of coresearch: a form of knowledge that can produce new
subjectivities by researching. New subjectivities that produce new knowledge,
and new knowledge that produces new subjectivities. If knowledge comes only
from conflict, knowledge goes back to conflict in order to produce new
autonomy and subjectivities.

### The Public School

Sean Dockray, Matteo Pasquinelli, Jason Smith and Caleb Waldorf are founding
members of and collaborators at The Public School. Initiated in 2007 under
Telic Arts Exchange (literally in the basement) in Los Angeles, The Public
School is a school with no curriculum. At the moment, it operates as follows:
first, classes are proposed by the public; then, people have the opportunity
to sign up for the classes; finally, when enough people have expressed
interest, the school finds a teacher and offers the class to those who signed
up. The Public School is not accredited, it does not give out degrees, and it
has no affiliation with the public school system. It is a framework that
supports autodidactic activities, operating under the assumption that
everything is in everything. The Public School currently exists in Los
Angeles, New York, Berlin, Brussels, Helsinki, Philadelphia, Durham, San Juan,
and is still expanding.



tactics in Fuller & Dockray 2011


Fuller & Dockray
In the Paradise of Too Many Books An Interview with Sean Dockray
2011


# In the Paradise of Too Many Books: An Interview with Sean Dockray

By Matthew Fuller, 4 May 2011


If the appetite to read comes with reading, then open text archive Aaaaarg.org
is a great place to stimulate and sate your hunger. Here, Matthew Fuller talks
to long-term observer Sean Dockray about the behaviour of text and
bibliophiles in a text-circulation network

Sean Dockray is an artist and a member of the organising group for the LA
branch of The Public School, a geographically distributed and online platform
for the self-organisation of learning.1 Since its initiation by Telic Arts, an
organisation which Sean directs, The Public School has also been taken up as a
model in a number of cities in the USA and Europe.2

We met to discuss the growing phenomenon of text-sharing. Aaaaarg.org has
developed over the last few years as a crucial site for the sharing and
discussion of texts drawn from cultural theory, politics, philosophy, art and
related areas. Part of this discussion is about the circulation of texts,
scanned and uploaded to other sites that it provides links to. Since
participants in The Public School often draw from the uploads to form readers
or anthologies for specific classes or events series, this project provides a
useful perspective from which to talk about the nature of text in the present
era.

**Sean Dockray:** People usually talk about three key actors in
discussions about publishing, which all play fairly understandable roles:
readers; publishers; and authors.

**Matthew Fuller:** Perhaps it could be said that Aaaaarg.org suggests some
other actors that are necessary for a real culture of text; firstly that books
also have some specific kind of activity to themselves, even if in many cases
it is only a latent quality, of storage, of lying in wait and, secondly, that
within the site, there is also this other kind of work done, that of the
public reception and digestion, the response to the texts, their milieu, which
involves other texts, but also systems and organisations, and platforms, such
as Aaaaarg.

Image: A young Roland Barthes, with space on his bookshelf

**SD:** Where even the three actors aren't stable! The people that are using
the site are fulfilling some role that usually the publisher has been doing or
ought to be doing, like marketing or circulation.

**MF:** Well it needn't be seen as promotion necessarily. There's also this
kind of secondary work with critics, reviewers and so on - which we can say is
also taken on by universities, for instance, and reading groups, magazines,
reviews - that gives an additional life to the text or brings it particular
kinds of attention, certain kind of readerliness.

**SD:** Situates it within certain discourses, makes it intelligible in a way,
in a different way.

**MF:** Yes, exactly, there's this other category of life to the book, which
is that of the kind of milieu or the organisational structure in which it
circulates and the different kind of networks of reference that it implies and
generates. Then there's also the book itself, which has some kind of agency,
or at least resilience and salience, when you think about how certain books
have different life cycles of appearance and disappearance.

**SD:** Well, in a contemporary sense, you have something like _Nights of
Labour_, by Rancière - which is probably going to be republished or
reprinted imminently - but has been sort of invisible, out of print, until, by
surprise, it becomes much more visible within the art world or something.

**MF:** And it's also been interesting to see how the art world plays a role
in the reverberations of text which isn't the same as that in cultural theory
or philosophy. Certainly _Nights of Labour_ , something that is very close to
the role that cultural studies plays in the UK, but which (cultural studies)
has no real equivalent in France, so then, geographically and linguistically,
and therefore also in a certain sense conceptually, the life of a book
exhibits these weird delays and lags and accelerations, so that's a good
example. I'm interested in what role Aaaaarg plays in that kind of
proliferation, the kind of things that books do, where they go and how they
become manifest. So I think one of the things Aaaaarg does is to make books
active in different ways, to bring out a different kind of potential in
publishing.

**SD:** Yes, the debate has tended so far to get stuck in those three actors
because people tend to end up picking a pair and placing them in opposition to
one another, especially around intellectual property. The discussion is very
simplistic and ends up in that way, where it's the authors against readers, or
authors against their publishers, with the publishers often introducing
scarcity, where the authors don't want it to be - that's a common argument.
There's this situation where the record industry is suing its own audience.
That's typically the field now.

**MF:** So within that kind of discourse of these three figures, have there
been cases where you think it's valid that there needs to be some form of
scarcity in order for a publishing project to exist?

**SD:** It's obviously not for me to say that there does or doesn't need to be
scarcity but the scarcity that I think we're talking about functions in a
really specific way: it's usually within academic publishing, the book or
journal is being distributed to a few libraries and maybe 500 copies of it are
being printed, and then the price is something anywhere from $60 to $500, and
there's just sort of an assumption that the audience is very well defined and
stable and able to cope with that.

**MF:** Yeah, which recognises that the audiences may be stable as an
institutional form, but not that over time the individual parts of say that
library user population change in their relationship to the institution. If
you're a student for a few years and then you no longer have access, you lose
contact with that intellectual community...

**SD:** Then people just kind of have to cling to that intellectual community.
So when scarcity functions like that, I can't think of any reason why that
_needs_ to happen. Obviously it needs to happen in the sense that there's a
relatively stable balance that wants to perpetuate itself, but what you're
asking is something else.

**MF:** Well there are contexts where the publisher isn't within that academic
system of very high costs, sustained by volunteer labour by academics, the
classic peer review system, but if you think of more of a trade publisher like
a left or a movement or underground publisher, whose books are being
circulated on Aaaaarg...

**SD:** They're in a much more precarious position obviously than a university
press, whose economics are quite different and where the volunteer labour of
the authors is being subsidised by salary - you have to look at the entire
system rather than just the publication. But in a situation where the
publisher is much more precarious and relying on sales and a swing in one
direction or another makes them unable to pay the rent on a storage facility,
one can definitely see why some sort of predictability is helpful and
necessary.

**MF:** So that leads me to wonder whether there are models of publishing that
are emerging that work with online distribution, or with the kind of thing
that Aaaaarg does specifically. Are there particular kinds of publishing
initiatives that really work well in this kind of context where free digital
circulation is understood as an a priori, or is it always in this kind of
parasitic or cyclical relationship?

**SD:** I have no idea how well they work actually; I don't know how well,
say, Australian publisher re.press, works for example.3 I like a lot of what
they publish, it's given visibility when re.press distributes it and that's a
lot of what a publisher's role seems to be (and what Aaaaarg does as well).
But are you asking how well it works in terms of economics?

**MF:** Well, just whether there are new forms of publishing emerging that work
well in this context that cut out some of the problems?

**SD:** Well, there's also the blog. Certain academic discourses, philosophy
being one, that are carried out on blogs really work to a certain extent, in
that there is an immediacy to ideas, their reception and response. But there's
other problems, such as the way in which, over time, the posts quickly get
forgotten. In this sense, a publication, a book, is kind of nice. It
crystallises and stays around.

**MF:** That's what I'm thinking, that the book is a particular kind of thing
which has its own quality as a form of media. I also wonder whether there
might be intermediate texts, unfinished texts, draft texts that might
circulate via Aaaaarg for instance or other systems. That, at least to me,
would be kind of unsatisfactory but might have some other kind of life and
readership to it. You know, as you say, the blog is a collection of relatively
occasional texts, or texts that are a work in progress, but something like
Aaaaarg perhaps depends upon texts that are finished, that are absolutely the
crystallisation of a particular thought.

Image: The Tree of Knowledge as imagined by Hans Sebald Beham in his 1543
engraving _Adam and Eve_

**SD:** Aaaaarg is definitely not a futuristic model. I mean, it occurs at a
specific time, which is while we're living in a situation where books exist
effectively as a limited edition. They can travel the world and reach certain
places, and yet the readership is greatly outpacing the spread and
availability of the books themselves. So there's a disjunction there, and
that's obviously why Aaaaarg is so popular. Because often there are maybe no
copies of a certain book within 400 miles of a person that's looking for it,
but then they can find it on that website, so while we're in that situation it
works.

**MF:** So it's partly based on a kind of asymmetry, that's spatial, that's
about the territories of publishers and distributors, and also a kind of
asymmetry of economics?

**SD:** Yeah, yeah. But others too. I remember when I was affiliated with a
university and I had JSTOR access and all these things and then I left my job
and then at some point not too long after that my proxy access expired and I
no longer had access to those articles which now would cost $30 a pop just to
even preview. That's obviously another asymmetry, even though, geographically
speaking, I'm in an identical position, just that my subject position has
shifted from affiliated to unaffiliated.

**MF:** There's also this interesting way in which Aaaaarg has gained
different constituencies globally, you can see the kind of shift in the texts
being put up. It seems to me anyway there are more texts coming from non-
western authors. This kind of asymmetry generates a flux. We're getting new
alliances between texts and you can see new bibliographies emerge.

**SD:** Yeah, the original community was very American and European and
gradually people were signing up at other places in order to have access to a
lot of these texts that didn't reach their libraries or their book stores or
whatever. But then there is a danger of US and European thought becoming
central. A globalisation where a certain mode of thought ends up just erasing
what's going on already in the cities where people are signing up, that's a
horrible possible future.

**MF:** But that's already something that's _not_ happening in some ways?

**SD:** Exactly, that's what seems to be happening now. It goes on to
translations that are being put up and then texts that are coming from outside
of the set of US and western authors and so, in a way, it flows back in the
other direction. This hasn't always been so visible, maybe it will begin to
happen some more. But think of the way people can list different texts
together as ‘issues' - a way that you can make arbitrary groupings - and
they're very subjective, you can make an issue named anything and just lump a
bunch of texts in there. But because, with each text, you can see what other
issues people have also put it in, it creates a trace of its use. You can see
that sometimes the issues are named after the reading groups, people are using
the issues format as a collecting tool, they might gather all Portuguese
translations, or The Public School uses them for classes. At other times it's
just one person organising their dissertation research but you see the wildly
different ways that one individual text can be used.

**MF:** So the issue creates a new form of paratext to the text, acting as a
kind of meta-index, they're a new form of publication themselves. To publish a
bibliography that actively links to the text itself is pretty cool. That also
makes me think within the structures of Aaaaarg it seems that certain parts of
the library are almost at breaking point - for instance the alphabetical
structure.

**SD:** Which is funny because it hasn't always been that alphabetical
structure either, it used to just be everything on one page, and then at some
point it was just taking too long for the page to load up A-Z. And today A is
as long as the entire index used to be, so yeah these questions of density and
scale are there but they've always been dealt with in a very ad hoc kind of
way, dealing with problems as they come. I'm sure that will happen. There
hasn't always been a search and, in a way, the issues, along with
alphabetising, became ways of creating more manageable lists, but even now the
list of issues is gigantic. These are problems of scale.

**MF:** So I guess there's also this kind of question that emerges in the
debate on reading habits and reading practices, this question of the breadth
of reading that people are engaging in. Do you see anything emerging in
Aaaaarg that suggests a new consistency of handling reading material? Is there
a specific quality, say, of the issues? For instance, some of them seem quite
focused, and others are very broad. They may provide insights into how new
forms of relationships to intellectual material may be emerging that we don't
quite yet know how to handle or recognise. This may be related to the lament
for the classic disciplinary road of deep reading of specific materials with a
relatively focused footprint whereas, it is argued, the net is encouraging a
much wider kind of sampling of materials with not necessarily so much depth.

**SD:** It's partially driven by people simply being in the system, in the
same way that the library structures our relationship to text, the net does it
in another way. One comment I've heard is that there's too much stuff on
Aaaaarg, which wasn't always the case. It used to be that I read every single
thing that was posted because it was slow enough and the things were short
enough that my response was, ‘Oh something new, great!' and I would read it.
But now, obviously that is totally impossible, there's too much; but in a way
that's just the state of things. It does seem like certain tactics of making
sense of things, of keeping things away and letting things in and queuing
things for reading later become just a necessary part of even navigating. It's
just the terrain at the moment, but this is only one instance. Even when I was
at the university and going to libraries, I ended up with huge stacks of books
and I'd just buy books that I was never going to read just to have them
available in my library, so I don't think feeling overwhelmed by books is
particularly new, just maybe the scale of it is. In terms of how people
actually conduct themselves and deal with that reality, it's difficult to say.
I think the issues are one of the few places where you would see any sort of
visible answers on Aaaaarg, otherwise it's totally anecdotal. At The Public
School we have organised classes in relationship to some of the issues, and
then we use the classes to also figure out what texts we are going to be
reading in the future, to make new issues and new classes. So it becomes an
organising group, reading and working its way through subject matter and
material, then revisiting that library and seeing what needs to be there.

**MF:** I want to follow that kind of strand of habits of accumulation,
sorting, deferring and so on. I wonder, what is a kind of characteristic or
unusual reading behavior? For instance are there people who download the
entire list? Or do you see people being relatively selective? How does the
mania of the net, with this constant churning of data, map over to forms of
bibliomania?

**SD:** Well, in Aaaaarg it's again very specific. Anecdotally again, I have
heard from people how much they download and sometimes they're very selective,
they just see something that's interesting and download it, other times they
download everything and occasionally I hear about this mania of mirroring the
whole site. What I mean about being specific to Aaaaarg is that a lot of the
mania isn't driven by just the need to have everything; it's driven by the
acknowledgement that the source is going to disappear at some point. That
sense of impending disappearance is always there, so I think that drives a lot
of people to download everything because, you know, it's happened a couple
times where it's just gone down or moved or something like that.

**MF:** It's true, it feels like something that is there even for a few weeks
or a few months. By a sheer fluke it could last another year, who knows.

**SD:** It's a different kind of mania, and usually we get lost in this
thinking that people need to possess everything but there is this weird
preservation instinct that people have, which is slightly different. The
dominant sensibility of Aaaaarg at the beginning was the highly partial and
subjective nature to the contents and that is something I would want to
preserve, which is why I never thought it to be particularly exciting to have
lots of high quality metadata - it doesn't have the publication date, it
doesn't have all the great metadata that say Amazon might provide. The system
is pretty dismal in that way, but I don't mind that so much. I read something
on the Internet which said it was like being in the porn section of a video
store with all black text on white labels, it was an absolutely beautiful way
of describing it. Originally Aaaaarg was about trading just those particular
moments in a text that really struck you as important, that you wanted other
people to read so it would be very short, definitely partial, it wasn't a
completist project, although some people maybe treat it in that way now. They
treat it as a thing that wants to devour everything. That's definitely not the
way that I have seen it.

**MF:** And it's so idiosyncratic I mean, you know it's certainly possible
that it could be read in a canonical mode, you can see that there's that
tendency there, of the core of Adorno or Agamben, to take the a's for
instance. But of the more contemporary stuff it's very varied, that's what's
nice about it as well. Alongside all the stuff that has a very long-term
existence, like historical books that may be over a hundred years old, what
turns up there is often unexpected, but certainly not random or
uninterpretable.

Image: French art historian André Malraux lays out his _Musée Imaginaire_, 1947

**SD:** It's interesting to think a little bit about what people choose to
upload, because it's not easy to upload something. It takes a good deal of
time to scan a book. I mean obviously some things are uploaded which are, have
always been, digital. (I wrote something about this recently about the scan
and the export - the scan being something that comes out of a labour in
relationship to an object, to the book, and the export is something where the
whole life of the text has sort of been digital from production to circulation
and reception). I happen to think of Aaaaarg in the realm of the scan and the
bootleg. When someone actually scans something they're potentially spending
hours, because they're doing the work on the book, they're doing something with
software, they're uploading.

**MF:** Aaaaarg hasn't introduced file quality thresholds either.

**SD:** No, definitely not. Where would that go?

**MF:** You could say with PDFs they have to be searchable texts?

**SD:** I'm sure a lot of people would prefer that. Even I would prefer it a
lot of the time. But again there is the idiosyncratic nature of what appears,
and there is also the idiosyncratic nature of the technical quality and
sometimes it's clear that the person that uploads something just has no real
experience of scanning anything. It's kind of an inevitable outcome. There are
movie sharing sites that are really good about quality control both in the
metadata and what gets up; but I think that if you follow that to the end,
then basically you arrive at the exported version being the Platonic text, the
impossible, perfect, clear, searchable, small - totally eliminating any trace
of what is interesting, the hand of reading and scanning, and this is what you
see with a lot of the texts on Aaaaarg. You see the hand of the person who's
read that book in the past, you see the hand of the person who scanned it.
Literally, their hand is in the scan. This attention to the labour of both
reading and redistributing, it's important to still have that.

**MF:** You could also find that in different ways for instance with a pdf, a
pdf that was bought directly as an ebook that's digitally watermarked will
have traces of the purchaser coded in there. So then there's also this work of
stripping out that data which will become a new kind of labour. So it doesn't
have this kind of humanistic refrain, the actual hand, the touch of the
labour. This is perhaps more interesting, the work of the code that strips it
out, so it's also kind of recognising that code as part of the milieu.

**SD:** Yeah, that is a good point, although I don't know that it's more
interesting labour.

**MF:** On a related note, The Public School as a model is interesting in that
it's kind of a convention, it has a set of rules, an infrastructure, a
website, it has a very modular being. Participants operate with a simple
organisational grammar which allows them to say ‘I want to learn this' or ‘I
want to teach this' and to draw in others on that basis. There's lots of
proposals for classes, some of them don't get taken up, but it's a process and
a set of resources which allow this aggregation of interest to occur. I just
wonder how you saw that kind of ethos of modularity in a way, as a set of
minimum rules or set of minimum capacities that allow a particular set of
things to occur?

**SD:** This may not respond directly to what you were just talking about, but
there's various points of entry to the school and also having something that
people feel they can take on as their own and I think the minimal structure
invites quite a lot of projection as to what that means and what's possible
with it. If it's not doing what you want it to do or you think, ‘I'm not sure
what it is', there's the sense that you can somehow redirect it.

**MF:** It's also interesting that projection itself can become a technical
feature so in a way the work of the imagination is done also through this kind
of tuning of the software structure. The governance that was handled by the
technical infrastructure actually elicits this kind of projection, elicits the
imagination in an interesting way.

**SD:** Yeah, yeah, I totally agree and, not to put too much emphasis on the
software, although I think that there's good reason to look at both the
software and the conceptual diagram of the school itself, but really in a way
it would grind to a halt if it weren't for the very traditional labour of
people - like an organising committee. In LA there's usually around eight of
us (now Jordan Biren, Solomon Bothwell, Vladada Gallegos, Liz Glynn, Naoko
Miyano, Caleb Waldorf, and me) who are deeply involved in making that
translation of these wishes - thrown onto the website that somehow attract the
other people - into actual classes.

**MF:** What does the committee do?

**SD:** Even that's hard to describe and that's what makes it hard to set up.
It's always very particular to even a single idea, to a single class proposal.
In general it'd be things like scheduling, finding an instructor if an
instructor is what's required for that class. Sometimes it's more about
finding someone who will facilitate, other times it's rounding up materials.
But it could be helping an open proposal take some specific form. Sometimes
it's scanning things and putting them on Aaaaarg. Sometimes, there will be a
proposal - I proposed a class in the very, very beginning on messianic time, I
wanted to take a class on it - and it didn't happen until more than a year and
a half later.

**MF:** Well that's messianic time for you.

**SD:** That and the internet. But other times it will be only a week later.
You know we did one on the Egyptian revolution and its historical context,
something which demanded a very quick turnaround. Sometimes the committee is
going to classes and there will be a new conflict that arises within a class,
that they then redirect into the website for a future proposal, which becomes
another class: a point of friction where it's not just like next, and next,
and next, but rather it's a knot that people can't quite untie, something that
you want to spend more time with, but you may want to move on to other things
immediately, so instead you postpone that to the next class. A lot of The
Public School works like that: it's finding momentum then following it. A lot
of our classes are quite short, but we try and string them together. The
committee are the ones that orchestrate that. In terms of governance, it is
run collectively, although with the committee, every few months people drop
off and new people come on. There are some people who've been on for years.
Other people who stay on just for that point of time that feels right for
them. Usually, people come on to the committee because they come to a lot of
classes, they start to take an interest in the project and before they know it
they're administering it.

**Matthew Fuller's <[m.fuller@gold.ac.uk](mailto:m.fuller@gold.ac.uk)> most
recent book, _Elephant and Castle_, is forthcoming from Autonomedia.**

**He is collated at**

**Footnotes**

1

2 [http://telic.info/](http://telic.info/)

3



tactics in Graziano 2018


Graziano
Pirate Care: How do we imagine the health care for the future we want?
2018


Pirate Care - How do we imagine the health care for the future we want?

Oct 5, 2018

by Valeria Graziano

A recent trend to reimagine the systems of care for the future is based on many of the principles of self-organization. From the passive figure of the patient — an aptly named subject, patiently awaiting aid from medical staff and carers — researchers and policymakers are moving towards a model defined as people-powered health — where care is discussed as transforming from a top-down service to a network of coordinated actors.

At the same time, for large numbers of people, to self-organize around their own healthcare needs is not a matter of predilection, but increasingly one of necessity. In Greece, where the measures imposed by the Troika decimated public services, a growing number of grassroots clinics set up by the Solidarity Movement have been providing medical attention to those without private insurance. In Italy, initiatives such as the Ambulatorio Medico Popolare in Milan offer free consultations to migrants and other vulnerable citizens.

What is new in all of these cases is that they frame what they do in clearly political terms, rejecting or sidestepping the more neutral ways in which the third sector and the NGOs have long presented care practices as apolitical, as ways to help out that should never ask questions bigger than the problems they set out to confront, and as standing beyond left and right (often for the sake of not alienating potential donors and funders).

Rather, the current trends towards self-organization in health care are very vocal and clear in their messages: the care system is in crisis, and we need to learn from what we know already. One thing we know is that the market or the financialization of assets cannot be the solution (do you remember when just a few years ago Occupy was buying back healthcare debts from financial speculators, thus saving thousands of Americans from dire economic circumstances? Or that scene from Michael Moore's Sicko, the documentary where a guy has to choose which finger to have amputated because he does not have enough cash to save both?).

Another thing we also know is that we cannot simply hold on to past models of managing the public sector, as most national healthcare systems were built for the needs of the last century. Administrations have been struggling to adapt to the changing nature of health conditions (moving from a predominance of epidemic to chronic diseases) and the different needs of today’s populations. And finally, we most definitely know that to go back to even more conservative ideas that frame care as a private issue that should fall on the shoulders of family members (and most often, of female relatives) or hired servants (also gendered and racialised) is not the best we can come up with.

Among the many initiatives that are rethinking how we organize the provision of health and care in ways that are accessible, fair, and efficient, there are a number of actors — mostly small organizations — who are experimenting with the opportunities introduced by digital technologies. While many charities and NGOs remain largely ignorant of the opportunities offered by technology, these new actors are developing DIY devices, wearables, 3D-printed bespoke components, apps and smart objects to intervene in areas otherwise neglected by the bigger players in the care system. These practices are presenting a new mode of operating that I want to call ‘pirate care’.
Pirate Care

Piracy and care are not always immediately relatable notions. The figure of the pirate in popular and media cultures is often associated with cunning intelligence and masculine modes of action: people running servers that allow others to illegally download music or movie files. One of the very first organizations to articulate the stakes of sharing knowledge was in fact named Piratbyrån. “When you pirate mp3s, you are downloading communism” was a popular motto at the time. And yet, bringing the idea of a pirate ethics into resonance with contemporary modes of care invites a different consideration of practices that propose a paradigm change and therefore inevitably find themselves in tricky positions vis-à-vis the law and the status quo. I have been noticing for a while now that another kind of contemporary pirate is coming to the fore in our messy society in the midst of many crises. This new kind of pirate could be best captured by another image: this time it is a woman, standing on the deck of a boat sailing through the Caribbean sea towards the Gulf of Mexico, about to deliver abortion pills to other women for whom this option is illegal in their country.

Women on Waves, founded in 1999, engages in its abortion-on-boat missions every couple of years. They are mostly symbolic actions, as they are rather expensive operations, and yet they are potent means for stirring public debate and have often been met with hostility — even military fleets. So far, they have visited seven countries, including Mexico, Guatemala and, more recently, Ireland and Poland, where feminist movements have been mobilizing in huge numbers to reclaim reproductive rights.

According to official statistics, more than 47,000 women die every year from complications resulting from illegal, unsafe abortion procedures, a service used by over 21 million women who do not have another choice. As Leticia Zenevich, spokesperson of Women on Waves, told HuffPost: “The fact that women need to leave the state sovereignty to retain their own sovereignty ― it makes clear states are deliberately stopping women from accessing their human right to health.” Besides the boat campaigns, the organization also runs Women on Web, an online medical abortion service active since 2005. The service is active in 17 languages, and it is helping more than 100,000 women per year to get information and access abortion pills. More recently, Women on Waves also began experimenting with the use of drones to deliver the pills in countries impacted by restrictive laws (such as Poland in 2015 and Northern Ireland in 2016).

Women on Waves are the perfect figure to begin to illustrate my idea of ‘pirate care’. By this term I want to bring attention to an emergent phenomenon in the contemporary world, whereby initiatives that want to bring support and care to the most vulnerable subjects in the most unstable situations increasingly have to do so by operating in the grey zones left open between various rules, laws and technologies. Some thrive in this shadow area, carefully avoiding calling attention to themselves for fear of attracting ferocious polemics and the trolling that inevitably accompanies them. In other cases, care practices that were previously considered the norm have now been pushed towards illegality.

Consider for instance the situation highlighted by the Docs Not Cops campaign that started in the UK four years ago, when the government had just introduced its ‘hostile environment’ policy with the aim of making everyday life as hard as possible for migrants with an irregular status. Suddenly, medical staff in hospitals and other care facilities were supposed to carry out document checks before being allowed to offer any assistance. Their mobilization denounced the policy as an abuse of mandate on the part of the Home Office and a threat to public health, given that it effectively discouraged patients from seeking help for fear of retaliation. Another sadly famous example of this trend of pushing many acts of care towards illegality would be the straitjacketing and criminalization of migrant rescuing NGOs in the Mediterranean on the part of various European countries, a policy led by the Italian government. Yet another example would be the increasing number of municipal decrees that make it a crime to offer food, money or shelter to the homeless in many cities in North America and Europe.
Hacker Ethics

This scenario reminds us of the tragic story of Antigone and the age-old question of what to do when the relationship between what the law says and what one feels is just becomes fraught with tensions and contradictions. Here, the second meaning of ‘pirate care’ becomes apparent as it points to the way in which a number of initiatives have been responding to the current crisis by mobilizing tactics and ethics first developed within the hacker movement.

As described by Steven Levy in Hackers, the general principles of a hacker ethic include sharing, openness, decentralization, free access to knowledge and tools, and a commitment to contributing to society’s democratic wellbeing. To which we could add, following Richard Stallman, founder of the free software movement, that “bureaucracy should not be allowed to get in the way of doing anything useful.” While Stallman was reflecting here on the experience of the M.I.T. AI Lab in 1971, his critique of bureaucracy captures well a specific trait of the techno-political nexus that is also shaping the present moment: as more technologies come to mediate everyday interactions, they are also reshaping the very structure of the institutions and organizations we inhabit, so that our lives are increasingly formatted to meet the requirements of an unprecedented number of standardised procedures, compulsory protocols, and legal obligations.

According to anthropologist David Graeber, we are living in an era of “total bureaucratization”. But while contemporary populism often presents bureaucracy as a problem of the public sector, implicitly suggesting “the market” to be the solution, Graeber’s study highlights how historically all so-called “free markets” have actually been made possible through the strict enforcement of state regulations. Since the birth of the modern corporation in 19th century America, “bureaucratic techniques (performance reviews, focus groups, time allocation surveys …) developed in financial and corporate circles came to invade the rest of society — education, science, government — and eventually, to pervade almost every aspect of everyday life.”
The forceps and the speculum

And thus, in resonance with the tradition of hacker ethics, a number of ‘pirate care’ practices are intervening to reshape what looking after our collective health will look like in the future. CADUS, for example, is a Berlin-based NGO which has recently set up a Crisis Response Makerspace to build open and affordable medical equipment specifically designed to bring assistance in extreme crisis zones where not many other organizations would venture, such as Syria and Northern Iraq. After donating their first mobile hospital to the Kurdish Red Crescent last year, CADUS is now working to develop a second version, in a container this time, able to be deployed in conflict zones deprived of any infrastructure, and a civil airdrop system to deliver food and medical equipment as fast as possible. The fact that CADUS adopted the formula of the makerspace to invent open emergency solutions that no private company would be interested in developing is not a coincidence, but emerges from a precise vision of how healthcare innovations should be produced and disseminated, and not only for extreme situations.

“Open source is the only way for medicine” — says Marcus Baw of Open Health Hub — as “medical software now is medicine”. Baw has been involved in another example of ‘pirate care’ in the UK, founding a number of initiatives to promote the adoption of open standards, open source code, and open governance in Health IT. The NHS spends about £500 million each time it refreshes Windows licenses, and aside from avoiding the high costs, an open source GP clinical system would be the only way to address the pressing ethical issue facing contemporary medicine: as software and technology become more and more part of the practice of medicine itself, they need to be subject to peer review and scrutiny to assess their clinical safety. Moreover, if such solutions are found to be effective and to save lives, it is the duty of all healthcare practitioners to share their knowledge with the rest of humanity, as per the Hippocratic Oath. To illustrate what happens when medical innovations are kept secret, Baw shares the story of the Chamberlen family of obstetricians, who kept the invention of the obstetric forceps a family trade secret for over 150 years, using the tool only to treat their elite clientele of royals and aristocracy. As a result, thousands of mothers and babies likely died in preventable circumstances.

It is perhaps significant that such a sad historical example of the consequences of closed medicine must come from the field of gynaecology, one of the most politically charged areas of medical specialization to this day. So much so that last year another collective of ‘pirate carers’ named GynePunk developed a biolab toolkit for emergency gynaecological care, to allow those excluded from reproductive healthcare — undocumented migrants, trans and queer women, drug users and sex workers — to perform basic checks on their own bodily fluids. Their prototypes include a centrifuge, a microscope and an incubator that can be cheaply realised by repurposing components of everyday items such as DVD players and computer fans, or by digital fabrication. In 2015, GynePunk also developed a 3D-printable speculum and — who knows? — perhaps their next project might include a pair of forceps…

As the ‘pirate care’ approach keeps proliferating, its tools and modes of organizing are keeping alive a horizon in which healthcare is not de facto reduced to a privilege.

PS. This article was written before the announcement of the launch of Mediterranea, which we believe to be another important example of pirate care. #piratecare #abbiamounanave


tactics in Graziano, Mars & Medak 2019


Graziano, Mars & Medak
Learning from #Syllabus
2019


LEARNING FROM #SYLLABUS
VALERIA GRAZIANO, MARCELL MARS, TOMISLAV MEDAK
The syllabus is the manifesto of the 21st century.
—Sean Dockray and Benjamin Forster1
#Syllabus Struggles
In August 2014, Michael Brown, an 18-year-old boy living in Ferguson, Missouri,
was fatally shot by police officer Darren Wilson. Soon after, as the civil protests denouncing police brutality and institutional racism began to mount across the United
States, Dr. Marcia Chatelain, Associate Professor of History and African American
Studies at Georgetown University, launched an online call urging other academics
and teachers ‘to devote the first day of classes to a conversation about Ferguson’ and ‘to recommend texts, collaborate on conversation starters, and inspire
dialogue about some aspect of the Ferguson crisis.’2 Chatelain did so using the
hashtag #FergusonSyllabus.
Also in August 2014, using the hashtag #gamergate, groups of users on 4Chan,
8Chan, Twitter, and Reddit instigated a misogynistic harassment campaign against
game developers Zoë Quinn and Brianna Wu, media critic Anita Sarkeesian, as well as
a number of other female and feminist game producers, journalists, and critics. In the
following weeks, The New Inquiry editors and contributors compiled a reading list and
issued a call for suggestions for their ‘TNI Syllabus: Gaming and Feminism’.3
In June 2015, Donald Trump announced his candidacy for President of the United
States. In the weeks that followed, he became the presumptive Republican nominee,
and The Chronicle of Higher Education introduced the syllabus ‘Trump 101’.4 Historians N.D.B. Connolly and Keisha N. Blain found ‘Trump 101’ inadequate, ‘a mock college syllabus […] suffer[ing] from a number of egregious omissions and inaccuracies’,
failing to include ‘contributions of scholars of color and address the critical subjects
of Trump’s racism, sexism, and xenophobia’. They assembled ‘Trump Syllabus 2.0’.5
Soon after, in response to a video in which Trump engaged in ‘an extremely lewd
conversation about women’ with TV host Billy Bush, Laura Ciolkowski put together a
‘Rape Culture Syllabus’.6

1 Sean Dockray, Benjamin Forster, and Public Office, ‘README.md’, Hyperreadings, 15 February 2018, https://samiz-dat.github.io/hyperreadings/.
2 Marcia Chatelain, ‘Teaching the #FergusonSyllabus’, Dissent Magazine, 28 November 2014, https://www.dissentmagazine.org/blog/teaching-ferguson-syllabus/.
3 ‘TNI Syllabus: Gaming and Feminism’, The New Inquiry, 2 September 2014, https://thenewinquiry.com/tni-syllabus-gaming-and-feminism/.
4 ‘Trump 101’, The Chronicle of Higher Education, 19 June 2016, https://www.chronicle.com/article/Trump-Syllabus/236824/.
5 N.D.B. Connolly and Keisha N. Blain, ‘Trump Syllabus 2.0’, Public Books, 28 June 2016, https://www.publicbooks.org/trump-syllabus-2-0/.
6 Laura Ciolkowski, ‘Rape Culture Syllabus’, Public Books, 15 October 2016, https://www.publicbooks.org/rape-culture-syllabus/.


In April 2016, members of the Standing Rock Sioux tribe established the Sacred Stone
Camp and started the protest against the Dakota Access Pipeline, the construction of
which threatened the only water supply at the Standing Rock Reservation. The protest at the site of the pipeline became the largest gathering of Native Americans in
the last 100 years and they earned significant international support for their ReZpect
Our Water campaign. As the struggle between protestors and the armed forces unfolded, a group of Indigenous scholars, activists, and supporters of the struggles of
First Nations people and persons of color, gathered under the name the NYC Stands
with Standing Rock Committee, put together #StandingRockSyllabus.7
The list of online syllabi created in response to political struggles has continued to
grow, and at present includes many more examples:
All Monuments Must Fall Syllabus
#Blkwomensyllabus
#BLMSyllabus
#BlackIslamSyllabus
#CharlestonSyllabus
#ColinKaepernickSyllabus
#ImmigrationSyllabus
Puerto Rico Syllabus (#PRSyllabus)
#SayHerNameSyllabus
Syllabus for White People to Educate Themselves
Syllabus: Women and Gender Non-Conforming People Writing about Tech
#WakandaSyllabus
What To Do Instead of Calling the Police: A Guide, A Syllabus, A Conversation, A
Process
#YourBaltimoreSyllabus
It would be hard to compile a comprehensive list of all the online syllabi that have
been created by social justice movements in the last five years, especially, but not
exclusively, those initiated in North America in the context of feminist and anti-racist
activism. In what is now a widely spread phenomenon, these political struggles use
social networks and resort to the hashtag template ‘#___Syllabus’ to issue calls for
the bottom-up aggregation of resources necessary for political analysis and pedagogy
centering on their concerns. For this reason, we’ll call this phenomenon ‘#Syllabus’.
During the same years that saw the spread of the #Syllabus phenomenon, university
course syllabi have also been transitioning online, often in a top-down process initiated
by academic institutions, which has seen the syllabus become a contested document
in the midst of increasing casualization of teaching labor, expansion of copyright protections, and technology-driven marketization of education.
In what follows, we retrace the development of the online syllabus in both of these
contexts, to investigate the politics enmeshed in this new media object. Our argument

7 ‘#StandingRockSyllabus’, NYC Stands with Standing Rock, 11 October 2016, https://nycstandswithstandingrock.wordpress.com/standingrocksyllabus/.


is that, on the one hand, #Syllabus names the problem of contemporary political culture as pedagogical in nature, while, on the other hand, it also exposes academicized
critical pedagogy and intellectuality as insufficiently political in their relation to lived
social reality. Situating our own stakes as both activists and academics in the present
debate, we explore some ways in which the radical politics of #Syllabus could be supported to grow and develop as an articulation of solidarity between amateur librarians
and radical educators.
#Syllabus in Historical Context: Social Movements and Self-Education
When Professor Chatelain launched her call for #FergusonSyllabus, she was mainly
addressing a community of fellow educators:
I knew Ferguson would be a challenge for teachers: When schools opened across
the country, how were they going to talk about what happened? My idea was simple, but has resonated across the country: Reach out to the educators who use
Twitter. Ask them to commit to talking about Ferguson on the first day of classes.
Suggest a book, an article, a film, a song, a piece of artwork, or an assignment that
speaks to some aspect of Ferguson. Use the hashtag: #FergusonSyllabus.8
Her call had a much greater resonance than she had originally anticipated as it reached
beyond the limits of the academic community. #FergusonSyllabus had both a significant impact in shaping the analysis and the response to the shooting of Michael
Brown, and in inspiring the many other #Syllabus calls that soon followed.
The #Syllabus phenomenon comprises different approaches and modes of operating. In some cases, the material is clearly claimed as the creation of a single individual, as in the case of #BlackLivesMatterSyllabus, which is prefaced on the project’s
landing page by a warning to readers that ‘material compiled in this syllabus should
not be duplicated without proper citation and attribution.’9 A very different position on
intellectual property has been embraced by other #Syllabus interventions that have
chosen a more commoning stance. #StandingRockSyllabus, for instance, is introduced as a crowd-sourced process and as a useful ‘tool to access research usually
kept behind paywalls.’10
The different workflows, modes of engagement, and positioning in relation to
intellectual property make #Syllabus readable as symptomatic of the multiplicity
that composes social justice movements. There is something old school—quite
literally—about the idea of calling a list of online resources a ‘syllabus’; a certain
quaintness, evoking thoughts of teachers and homework. This is worthy of investigation especially if contrasted with the attention dedicated to other online cultural
phenomena such as memes or fake news. Could it be that the online syllabus offers

8 Marcia Chatelain, ‘How to Teach Kids About What’s Happening in Ferguson’, The Atlantic, 25 August 2014, https://www.theatlantic.com/education/archive/2014/08/how-to-teach-kids-about-whats-happening-in-ferguson/379049/.
9 Frank Leon Roberts, ‘Black Lives Matter: Race, Resistance, and Populist Protest’, 2016, http://www.blacklivesmattersyllabus.com/fall2016/.
10 ‘#StandingRockSyllabus’, NYC Stands with Standing Rock, 11 October 2016, https://nycstandswithstandingrock.wordpress.com/standingrocksyllabus/.


a useful, fresh format precisely for the characteristics that foreground its connections to older pedagogical traditions and techniques, predating digital cultures?
#Syllabus can indeed be analyzed as falling within a long lineage of pedagogical tools
created by social movements to support processes of political subjectivation and the
building of collective consciousness. Activists and militant organizers have time and
again created and used various textual media objects—such as handouts, pamphlets,
cookbooks, readers, or manifestos—to facilitate a shared political analysis and foment
mass political mobilization.
In the context of the US, anti-racist movements have historically placed great emphasis on critical pedagogy and self-education. In 1964, the Council of Federated Organizations (an alliance of civil rights initiatives) and the Student Nonviolent
Coordinating Committee (SNCC) created a network of 41 temporary alternative
schools in Mississippi. Recently, the Freedom Library Project, a campaign born out
of #FergusonSyllabus to finance under-resourced pedagogical initiatives, openly
referenced this as a source of inspiration. The Freedom Summer Project of 1964
brought hundreds of activists, students, and scholars (many of whom were white)
from the north of the country to teach topics and issues that the discriminatory
state schools would not offer to black students. In the words of an SNCC report,
Freedom Schools were established following the belief that ‘education—facts to
use and freedom to use them—is the basis of democracy’,11 a conviction echoed
by the ethos of contemporary #Syllabus initiatives.
Bob Moses, a civil rights movement leader who was the head of the literacy skills initiative in Mississippi, recalls the movement’s interest, at the time, in teaching methods
that used the very production of teaching materials as a pedagogical tool:
I had gotten hold of a text and was using it with some adults […] and noticed that
they couldn’t handle it because the pictures weren’t suited to what they knew […]
That got me into thinking about developing something closer to what people were
doing. What I was interested in was the idea of training SNCC workers to develop
material with the people we were working with.12
It is significant that for him the actual use of the materials the group created was much
less important than the process of producing the teaching materials together. This focus
on what could be named as a ‘pedagogy of teaching’, or perhaps more accurately ‘the
pedagogy of preparing teaching materials’, is also a relevant mechanism at play in the
current #Syllabus initiatives, as their crowdsourcing encourages different kinds of people
to contribute what they feel might be relevant resources for the broader movement.
Alongside the crucial import of radical black organizing, another relevant genealogy in
which to place #Syllabus would be the international feminist movement and, in particular, the strategies developed in the 70s campaign Wages for Housework, spearheaded

11 Daniel Perlstein, ‘Teaching Freedom: SNCC and the Creation of the Mississippi Freedom Schools’, History of Education Quarterly 30.3 (Autumn 1990): 302.
12 Perlstein, ‘Teaching Freedom’: 306.


by Selma James and Silvia Federici. The Wages for Housework campaign drove home
the point that unwaged reproductive labor provides a foundation for capitalist exploitation. They wanted to encourage women to denaturalize and question the accepted
division of labor into remunerated work outside the house and labor of love within
the confines of domesticity, discussing taboo topics such as ‘prostitution as socialized housework’ and ‘forced sterilization’ as issues impacting poor, often racialized,
women. The organizing efforts of Wages for Housework held political pedagogy at their
core. They understood that that pedagogy required:
having literature and other materials available to explain our goals, all written in a
language that women can understand. We also need different types of documents,
some more theoretical, others circulating information about struggles. It is important
that we have documents for women who have never had any political experience.
This is why our priority is to write a popular pamphlet that we can distribute massively and for free—because women have no money.13
The obstacles faced by the Wages for Housework campaign were many, beginning
with the issue of how to reach a dispersed constituency of isolated housewives
and how to keep the revolutionary message at the core of their claims accessible
to different groups. In order to tackle these challenges, the organizers developed
a number of innovative communication tactics and pedagogical tools, including
strategies to gain mainstream media coverage, pamphlets and leaflets translated
into different languages,14 a storefront shop in Brooklyn, and promotional tables at
local events.
Freedom Schools and the Wages for Housework campaign are only two amongst
the many examples of the critical pedagogies developed within social movements.
The #Syllabus phenomenon clearly stands in the lineage of this history, yet we should
also highlight its specificity in relation to the contemporary political context in which it
emerged. The #Syllabus acknowledges that since the 70s—and also due to students’
participation in protests and their display of solidarity with other political movements—
subjects such as Marxist critical theory, women studies, gender studies, and African
American studies, together with some of the principles first developed in critical pedagogy, have become integrated into the educational system. The fact that many initiators of #Syllabus initiatives are women and Black academics speaks to this historical
shift as an achievement of that period of struggles. However, the very necessity felt by
these educators to kick-start their #Syllabus campaigns outside the confines of academia simultaneously reveals the difficulties they encounter within the current privatized and exclusionary educational complex.

13 Silvia Federici and Arlen Austin (eds) The New York Wages for Housework Committee 1972-1977: History, Theory and Documents. New York: Autonomedia, 2017: 37.
14 Some of the flyers and pamphlets were digitized by MayDay Rooms, ‘a safe haven for historical material linked to social movements, experimental culture and the radical expression of marginalised figures and groups’ in London, and can be found in their online archive: ‘Wages for Housework: Pamphlets – Flyers – Photographs’, MayDay Rooms, http://maydayrooms.org/archives/wages-for-housework/wfhw-pamphlets-flyers-photographs/.


#Syllabus as a Media Object
Besides its contextualization within the historical legacy of previous grassroots mobilizations, it is also necessary to discuss #Syllabus as a new media object in its own
right, in order to fully grasp its relevance for the future politics of knowledge production and transmission.
If we were to describe this object, a #Syllabus would be an ordered list of links to
scholarly texts, news reports, and audiovisual media, mostly aggregated through a
participatory and iterative process, and created in response to political events indicative of larger conditions of structural oppression. Still, as we have seen, #Syllabus
as a media object doesn’t follow a strict format. It varies based on the initial vision
of its initiators, the political cause, and the social composition of the relevant struggle.
Nor does it follow the format of traditional academic syllabi. While a list of learning
resources is at the heart of any syllabus, a boilerplate university syllabus typically
also includes objectives, a timetable, attendance, coursework, examination, and an
outline of the grading system used for the given course. Relieved of these institutional
requirements, the #Syllabus typically includes only a reading list and a hashtag. The
reading list provides resources for understanding what is relevant to the here and
now, while the hashtag provides a way to disseminate across social networks the call
to both collectively edit and teach what is relevant to the here and now. Both the list
and the hashtag are specificities and formal features of the contemporary (internet)
culture and therefore merit further exploration in relation to the social dynamics at
play in #Syllabus initiatives.
The different phases of the internet’s development approached the problem of the
discoverability of relevant information in different ways. In the early days, the Gopher
protocol organized information into a hierarchical file tree. With the rise of the World Wide
Web (WWW), Yahoo tried to employ experts to classify and catalog the internet into
a directory of links. That seemed to be a successful approach for a while, but then
Google (founded in 1998) came along and started to use a webgraph of links to rank
the importance of web pages relative to a given search query.
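
For readers who want to see the mechanics behind that shift, here is a minimal sketch of link-based ranking in the spirit of PageRank. It is only an illustration under simplifying assumptions, not Google's actual system, and the tiny four-page web graph is invented for the example.

# A minimal power-iteration sketch of link-based ranking (PageRank-style).
# The four-page web graph below is invented purely for illustration.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def rank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    scores = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # Dangling page: spread its score evenly across all pages.
                share = scores[page] / n
                for p in pages:
                    new[p] += damping * share
            else:
                # Each page passes its score along its outgoing links.
                share = scores[page] / len(outgoing)
                for target in outgoing:
                    new[target] += damping * share
        scores = new
    return scores

print(sorted(rank(links).items(), key=lambda kv: -kv[1]))

Running rank(links) on this toy graph gives page "c" the highest score, simply because most other pages link to it: relevance is read off the structure of links rather than off any curated directory.
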
In 2005, Clay Shirky wrote the essay ‘Ontology is Overrated: Categories, Links and
Tags’,15 developed from his earlier talk ‘Folksonomies and Tags: The Rise of User-Developed Classification’. Shirky used Yahoo’s attempt to categorize the WWW to argue
against any attempt to classify a vast heterogenous body of information into a single
hierarchical categorical system. In his words: ‘[Yahoo] missed [...] that, if you’ve got
enough links, you don’t need the hierarchy anymore. There is no shelf. There is no file
system. The links alone are enough.’ Those words resonated with many. By following
simple formatting rules, we, the internet users, whom Time magazine named Person of
the Year in 2006, proved that it is possible to collectively write the largest encyclopedia
ever. But, even beyond that, and as per Shirky’s argument, if enough of us organized
our own snippets of the vast body of the internet, we could replace old canons, hierarchies, and ontologies with folksonomies, social bookmarks, and (hash)tags.

15 Clay Shirky, ‘Ontology Is Overrated: Categories, Links, and Tags’, 2005, http://shirky.com/writings/herecomeseverybody/ontology_overrated.html.


Very few who lived through those times would have thought that only a few years later
most user-driven services would be acquired by a small number of successful companies and then be shut down. Or, that Google would decide not to include the biggest
hashtag-driven platform, Twitter, in its search index and that the search results on
its first page would only come from a handful of usual suspects: media conglomerates, Wikipedia, Facebook, LinkedIn, Amazon, Reddit, Quora. Or, that Twitter would
become the main channel for the racist, misogynist, fascist escapades of the President
of the United States.
This internet folk naivety—stoked by an equally enthusiastic, venture-capital-backed
startup culture—was not just naivety. This was also a period of massive experimental
use of these emerging platforms. Therefore, this history would merit being properly revisited and researched. In this text, however, we can only hint at this history: to contextualize how the hashtag as a formalization initially emerged, and how with time the
user-driven web lost some of its potential. Nonetheless, hashtags today still succeed in
propagating political mobilizations in the network environment. Some will say that this
propagation is nothing but a reflection of the internet as a propaganda machine, and
there’s no denying that hashtags do serve a propaganda function. However, it equally
matters that hashtags retain the capacity to shape coordination and self-organization,
and they are therefore a reflection of the internet as an organization machine.
As mentioned, #Syllabus as a media object is an ordered list of links to resources.
In the long history of knowledge retrieval systems and attempts to help users find
relevant information from big archives, the list on the internet continues in the tradition of the index card catalog in libraries, of charts in the music industry, or mixtapes
and playlists in popular culture, helping people tell their stories of what is relevant and
what isn’t through an ordered sequence of items. The list (as a format) together with
the hashtag find themselves in the list (pun intended) of the most iconic media objects
of the internet. In the network media environment, being smart in creating new lists
became the way to displace old lists of relevance, the way to dismantle canons, the
way to unlearn. The way to become relevant.
The Academic Syllabus Migrates Online
#Syllabus interventions are a challenge issued by political struggles to educators as
they expose a fundamental contradiction in the operations of academia. While critical pedagogies of yesteryear’s social movements have become integrated into the
education system, the radical lessons that these pedagogies teach students don’t
easily reconcile with their experience: professional practice courses, the rhetoric of employability, and compulsory internships, where what they learn is merely instrumental, leave them wondering how on earth they are to apply their Marxism or feminism to their everyday lives.
Cognitive dissonance is at the basis of degrees in the liberal arts. And to make things
worse, the marketization of higher education, the growing fees, and the privatization of research have placed universities in a position where they increasingly struggle to
provide institutional space for critical interventions in social reality. As universities become more dependent on the ‘customer satisfaction’ of their students for survival, they
steer away from heated political topics or from supporting faculty members who might
decide to engage with them. Borrowing the words of Stefano Harney and Fred Moten,

ACTIONS

123

‘policy posits curriculum against study’,16 creating the paradoxical situation wherein
today’s universities are places in which it is possible to do almost everything except
study. What Harney and Moten propose instead is the re-appropriation of the diffuse
capacity of knowledge generation that stems from the collective processes of self-organization and commoning. As Moten puts it: ‘When I think about the way we use the
term ‘study,’ I think we are committed to the idea that study is what you do with other
people.’17 And it is this practice of sharing a common repertoire—what Moten and
Harney call ‘rehearsal’18—that is crucially constitutive of a crowdsourced #Syllabus.
This contradiction and the tensions it brings to contemporary neoliberal academia can
be symptomatically observed in the recent evolution of the traditional academic syllabus. As a double consequence of (some) critical pedagogies becoming incorporated
into the teaching process and universities striving to reduce their liability risks, academic syllabi have become increasingly complex and extensive documents. They are
now understood as both a ‘social contract’ between the teachers and their students,
and ‘terms of service’19 between the institution providing educational services and the
students increasingly framed as sovereign consumers making choices in the market of
educational services. The growing official import of the syllabus has had the effect that
educators have started to reflect on how the syllabus translates power dynamics into their classrooms. For instance, the critical pedagogue Adam Heidebrink-Bruno has
demanded that the syllabus be re-conceived as a manifesto20—a document making
these concerns explicit. And indeed, many academics have started to experiment with
the form and purpose of the syllabus, opening it up to a process of co-conceptualization with their students, or proposing ‘the other syllabus’21 to disrupt asymmetries.
At the same time, universities are unsurprisingly moving their syllabi online, a migration that can be read as indicative of three larger structural shifts in academia.
First, the push to make syllabi available online, initiated in the US, reinforces the differential effects of the reputation economy. It is the Ivy League universities and their professorial star system that can harness the syllabus to advertise the originality of their
scholarship, while the underfunded public universities and junior academics are burdened with teaching the required essentials. This practice is tied up with the replication
in academia of the different valorization between what is considered to be the labor of
production (research) and that of social reproduction (teaching). The low esteem (and
corresponding lower rewards and remuneration) for the kinds of intellectual labors that
can be considered labors of care—editing journals, reviewing papers or marking, for
instance—fits perfectly well with the gendered legacies of the academic institution.

16 Stefano Harney and Fred Moten, The Undercommons: Fugitive Planning & Black Study, New York: Autonomedia, 2013, p. 81.
17 Harney and Moten, The Undercommons, p. 110.
18 Harney and Moten, The Undercommons, p. 110.
19 Angela Jenks, ‘It’s In The Syllabus’, Teaching Tools, Cultural Anthropology website, 30 June 2016, https://culanth.org/fieldsights/910-it-s-in-the-syllabus/.
20 Adam Heidebrink-Bruno, ‘Syllabus as Manifesto: A Critical Approach to Classroom Culture’, Hybrid Pedagogy, 28 August 2014, http://hybridpedagogy.org/syllabus-manifesto-critical-approach-classroom-culture/.
21 Lucy E. Bailey, ‘The “Other” Syllabus: Rendering Teaching Politics Visible in the Graduate Pedagogy Seminar’, Feminist Teacher 20.2 (2010): 139–56.


Second, with the withdrawal of resources to pay precarious and casualized academics during their ‘prep’ time (that is, the time in which they can develop new
course material, including assembling new lists of references, updating their courses as well as the methodologies through which they might deliver these), syllabi
now assume an ambivalent role between the tendencies for collectivization and
individualization of insecurity. The reading lists contained in syllabi are not covered
by copyrights; they are like playlists or recipes, which historically had the effect of
encouraging educators to exchange lesson plans and make their course outlines
freely available as a valuable knowledge common. Yet, in the current climate where
universities compete against each other, the authorial function is being extended
to these materials too. Recently, US universities have been leading a trend towards
the interpretation of the syllabus as copyrightable material, an interpretation that
opened up, as would be expected, a number of debates over who is a syllabus’
rightful owner, whether the academics themselves or their employers. If the latter interpretation were to prevail, this would enable universities to easily replace
academics while retaining their contributions to the pedagogical offer. The fruits of
a teacher’s labor could thus be turned into instruments of their own deskilling and
casualization: why would universities pay someone to write a course when they can
recycle someone else’s syllabus and get a PhD student or a precarious post doc to
teach the same class at a fraction of the price?
This tendency to introduce a logic of property therefore spurs competitive individualism and erasure of contributions from others. Thus, crowdsourcing the syllabus
in the context of growing precarization of labor risks remaining a partial process,
as it might heighten the anxieties of those educators who do not enjoy the security
of a stable job and who are therefore the most susceptible to the false promises of
copyright enforcement and authorship understood as a competitive, small entrepreneurial activity. However, when inserted in the context of live, broader political
struggles, the opening up of the syllabus could and should be an encouragement
to go in the opposite direction, providing a ground to legitimize the collective nature
of the educational process and to make all academic resources available without
copyright restrictions, while devising ways to secure the proper attribution and the
just remuneration of everyone’s labor.
The introduction of the logic of property is hard to challenge as it is furthered by commercial academic publishers. Oligopolists, such as Elsevier, are not only notorious for
using copyright protections to extract usurious profits from the mostly free labor of
those who write, peer review, and edit academic journals,22 but they are now developing all sorts of metadata, metrics, and workflow systems that are increasingly becoming central for teaching and research. In addition to their publishing business, Elsevier
has expanded its ‘research intelligence’ offering, which now encompasses a whole
range of digital services, including the Scopus citation database; Mendeley reference
manager; the research performance analytics tools SciVal and Research Metrics; the
centralized research management system Pure; the institutional repository and publishing

22 Vincent Larivière, Stefanie Haustein, and Philippe Mongeon, ‘The Oligopoly of Academic Publishers in the Digital Era’, PLoS ONE 10.6 (10 June 2015), https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0127502/.


platform Bepress; and, last but not least, grant discovery and funding flow tools
Funding Institutional and Elsevier Funding Solutions. Given how central digital services
are becoming in today’s universities, whoever owns these platforms is the university.
Third, the migration online of the academic syllabus falls into larger efforts by universities to ‘disrupt’ the educational system through digital technologies. The introduction
of virtual learning environments has led to lesson plans, slides, notes, and syllabi becoming items to be deposited with the institution. The doors of public higher education are being opened to commercial qualification providers by means of the rise in
metrics-based management, digital platforming of university services, and transformation of students into consumers empowered to make ‘real-time’ decisions on how to
spend their student debt.23 Such neoliberalization masquerading behind digitization
is nowhere more evident than in the hype that was generated around Massive Open
Online Courses (MOOCs), exactly at the height of the last economic crisis.
MOOCs developed gradually from the Massachusetts Institute of Technology’s (MIT) initial experiments with opening up its teaching materials to the public through the OpenCourseWare project in 2001. By 2011, MOOCs were saluted as a full-on democratization of access to ‘Ivy-League-caliber education [for] the world’s poor.’24 And yet, their
promise quickly deflated following extremely low completion rates (as low as 5%).25
Believing that in fifty years there would be no more than 10 institutions globally delivering higher education,26 by the end of 2013 Sebastian Thrun (Google’s celebrated roboticist who in 2012 founded the for-profit MOOC platform Udacity) had to admit that Udacity
offered a ‘lousy product’ that proved to be a total failure with ‘students from difficult
neighborhoods, without good access to computers, and with all kinds of challenges in
their lives.’27 Critic Aaron Bady has thus rightfully argued that:
[MOOCs] demonstrate what the technology is not good at: accreditation and mass
education. The MOOC rewards self-directed learners who have the resources and
privilege that allow them to pursue learning for its own sake [...] MOOCs are also a
really poor way to make educational resources available to underserved and underprivileged communities, which has been the historical mission of public education.28
Indeed, the ‘historical mission of public education’ was always and remains to this
day highly contested terrain—the very idea of a public good being under attack by
dominant managerial techniques that try to redefine it, driving what Randy Martin

23 Ben Williamson, ‘Number Crunching: Transforming Higher Education into “Performance Data”’, Medium, 16 August 2018, https://medium.com/ussbriefs/number-crunching-transforming-highereducation-into-performance-data-9c23debc4cf7.
24 Max Chafkin, ‘Udacity’s Sebastian Thrun, Godfather Of Free Online Education, Changes Course’, FastCompany, 14 November 2013, https://www.fastcompany.com/3021473/udacity-sebastianthrun-uphill-climb/.
25 ‘The Rise (and Fall?) Of the MOOC’, Oxbridge Essays, 14 November 2017, https://www.oxbridgeessays.com/blog/rise-fall-mooc/.
26 Steven Leckart, ‘The Stanford Education Experiment Could Change Higher Learning Forever’, Wired, 20 March 2012, https://www.wired.com/2012/03/ff_aiclass/.
27 Chafkin, ‘Udacity’s Sebastian Thrun’.
28 Aaron Bady, ‘The MOOC Moment and the End of Reform’, Liberal Education 99.4 (Fall 2013), https://www.aacu.org/publications-research/periodicals/mooc-moment-and-end-reform.


aptly called the ‘financialization of daily life.’29 The failure of MOOCs finally points to a
broader question, also impacting the vicissitudes of #Syllabus: Where will actual study
practices find refuge in the social, once the social is made directly productive for capital at all times? Where will study actually ‘take place’, in the literal sense of the phrase,
claiming the resources that it needs for co-creation in terms of time, labor, and love?
Learning from #Syllabus
What have we learned from the #Syllabus phenomenon?
The syllabus is the manifesto of the 21st century.
Political struggles against structural discrimination, oppression, and violence in the
present are continuing the legacy of critical pedagogies of earlier social movements
that coupled the process of political subjectivation with that of collective education.
By creating effective pedagogical tools, movements have brought educators and students into the fold of their struggles. In the context of our new network environment,
political struggles have produced a new media object: #Syllabus, a crowdsourced list
of resources—historic and present—relevant to a cause. By doing so, these struggles
adapt, resist, and live in and against the networks dominated by techno-capital, with
all of the difficulties and contradictions that entails.
What have we learned from the academic syllabus migrating online?
In the contemporary university, critical pedagogy is clashing head-on with the digitization of higher education. Education that should empower and research that should
emancipate are increasingly left out in the cold due to the data-driven marketization
of academia, short-cutting the goals of teaching and research to satisfy the fluctuating demands of the labor market and financial speculation. Resistance against the capture of data, research workflows, and scholarship by means of digitization is a key
struggle for the future of mass intellectuality beyond exclusions of class, disability,
gender, and race.
What have we learned from #Syllabus as a media object?
As old formats transform into new media objects, the digital network environment defines the conditions in which these new media objects try to adjust, resist, and live. A
right intuition can intervene and change the landscape—not necessarily for the good,
particularly if the imperatives of capital accumulation and social control prevail. We
thus need to re-appropriate the process of production and distribution of #Syllabus
as a media object in its totality. We need to build tools to collectively control the workflows that are becoming the infrastructures on top of which we collaboratively produce
knowledge that is vital for us to adjust, resist, and live. In order to successfully intervene in the world, every aspect of production and distribution of these new media objects becomes relevant. Every single aspect counts. The order of items in a list counts.
The timestamp of every version of the list counts. The name of every contributor to

29 Randy Martin, Financialization Of Daily Life, Philadelphia: Temple University Press, 2002.


every version of the list counts. Furthermore, the workflow to keep track of all of these
aspects is another complex media object—a software tool of its own—with its own order and its own versions. It is a recursive process of creating an autonomous ecology.
#Syllabus can be conceived as a recursive process of versioning lists, pointing to textual, audiovisual, or other resources. With all of the linked resources publicly accessible to all; with all versions of the lists editable by all; with all of the edits attributable to
their contributors; with all versions, all linked resources, all attributions preservable by
all, just such an autonomous ecology can be made for #Syllabus. In fact, Sean Dockray, Benjamin Forster, and Public Office have already proposed such a methodology in
their Hyperreadings, a forkable readme.md plaintext document on GitHub. They write:
A text that by its nature points to other texts, the syllabus is already a relational
document acknowledging its own position within a living field of knowledge. It is
decidedly not self-contained, however it often circulates as if it were.
If a syllabus circulated as a HyperReadings document, then it could point directly to the texts and other media that it aggregates. But just as easily as it circulates, a HyperReadings syllabus could be forked into new versions: the syllabus
is changed because there is a new essay out, or because of a political disagreement, or because following the syllabus produced new suggestions. These forks
become a family tree where one can follow branches and trace epistemological
mutations.30
It is in line with this vision, which we share with the HyperReadings crew, and in line
with our analysis, that we, as amateur librarians, activists, and educators, make our
promise beyond the limits of this text.
The workflow that we are bootstrapping here will keep in mind every aspect of the media object syllabus (order, timestamp, contributor, version changes), allowing diversity
via forking and branching, and making sure that every reference listed in a syllabus
will find its reference in a catalog which will lead to the actual material, in digital form,
needed for the syllabus.
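
To make the bookkeeping in this promise concrete (order, timestamp, contributor, version changes, forking), here is a minimal sketch in Python of a versioned, forkable list of references. The class name, fields, and sample entries are our own illustrative assumptions; this is not the HyperReadings implementation nor the workflow itself, only a toy model of its ingredients.

# A minimal sketch of a versioned, forkable syllabus list.
# Names and fields are illustrative assumptions, not the HyperReadings data model.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional, Tuple

@dataclass(frozen=True)
class SyllabusVersion:
    items: Tuple[str, ...]        # ordered references: the order counts
    contributor: str              # who made this version: attribution counts
    timestamp: datetime           # when it was made: the timestamp counts
    parent: Optional["SyllabusVersion"] = None  # previous version or fork point

    def edit(self, new_items, contributor):
        """Create a new version (or a fork) that keeps the full history."""
        return SyllabusVersion(
            items=tuple(new_items),
            contributor=contributor,
            timestamp=datetime.now(timezone.utc),
            parent=self,
        )

    def history(self):
        """Walk back through all versions, most recent first."""
        version = self
        while version is not None:
            yield version
            version = version.parent

# Usage: one initial list, then two divergent forks sharing the same ancestor.
v1 = SyllabusVersion(("Perlstein 1990", "Federici & Austin 2017"), "editor-a",
                     datetime.now(timezone.utc))
fork_a = v1.edit(v1.items + ("Harney & Moten 2013",), "editor-b")
fork_b = v1.edit(("Federici & Austin 2017", "Perlstein 1990"), "editor-c")  # reordering counts too
print([v.contributor for v in fork_a.history()])  # ['editor-b', 'editor-a']

Because each version keeps a pointer to its parent, forks form exactly the kind of family tree the HyperReadings authors describe, and every edit remains attributable, timestamped, and preservable.
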
Against the enclosures of copyright, we will continue building shadow libraries and
archives of struggles, providing access to resources needed for the collective processes of education.
Against the corporate platforming of workflows and metadata, we will work with social
movements, political initiatives, educators, and researchers to aggregate, annotate,
version, and preserve lists of resources.
Against the extractivism of academia, we will take care of the material conditions that
are needed for such collective thinking to take place, both on- and offline.

30 Sean Dockray, Benjamin Forster, and Public Office, ‘README.md’, Hyperreadings, 15 February
2018, https://samiz-dat.github.io/hyperreadings/.


Bibliography
Bady, Aaron. ‘The MOOC Moment and the End of Reform’, Liberal Education 99.4 (Fall 2013), https://www.aacu.org/publications-research/periodicals/mooc-moment-and-end-reform/.
Bailey, Lucy E. ‘The “Other” Syllabus: Rendering Teaching Politics Visible in the Graduate Pedagogy Seminar’, Feminist Teacher 20.2 (2010): 139–56.
Chafkin, Max. ‘Udacity’s Sebastian Thrun, Godfather Of Free Online Education, Changes Course’, FastCompany, 14 November 2013, https://www.fastcompany.com/3021473/udacity-sebastianthrun-uphill-climb/.
Chatelain, Marcia. ‘How to Teach Kids About What’s Happening in Ferguson’, The Atlantic, 25 August 2014, https://www.theatlantic.com/education/archive/2014/08/how-to-teach-kids-about-whats-happening-in-ferguson/379049/.
_____. ‘Teaching the #FergusonSyllabus’, Dissent Magazine, 28 November 2014, https://www.dissentmagazine.org/blog/teaching-ferguson-syllabus/.
Ciolkowski, Laura. ‘Rape Culture Syllabus’, Public Books, 15 October 2016, https://www.publicbooks.org/rape-culture-syllabus/.
Connolly, N.D.B. and Keisha N. Blain. ‘Trump Syllabus 2.0’, Public Books, 28 June 2016, https://www.publicbooks.org/trump-syllabus-2-0/.
Dockray, Sean, Benjamin Forster, and Public Office. ‘README.md’, HyperReadings, 15 February 2018, https://samiz-dat.github.io/hyperreadings/.
Federici, Silvia, and Arlen Austin (eds). The New York Wages for Housework Committee 1972-1977: History, Theory and Documents, New York: Autonomedia, 2017.
Harney, Stefano, and Fred Moten. The Undercommons: Fugitive Planning & Black Study, New York: Autonomedia, 2013.
Heidebrink-Bruno, Adam. ‘Syllabus as Manifesto: A Critical Approach to Classroom Culture’, Hybrid Pedagogy, 28 August 2014, http://hybridpedagogy.org/syllabus-manifesto-critical-approach-classroom-culture/.
Jenks, Angela. ‘It’s In The Syllabus’, Teaching Tools, Cultural Anthropology website, 30 June 2016, https://culanth.org/fieldsights/910-it-s-in-the-syllabus/.
Larivière, Vincent, Stefanie Haustein, and Philippe Mongeon. ‘The Oligopoly of Academic Publishers in the Digital Era’, PLoS ONE 10.6 (10 June 2015), https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0127502/.
Leckart, Steven. ‘The Stanford Education Experiment Could Change Higher Learning Forever’, Wired, 20 March 2012, https://www.wired.com/2012/03/ff_aiclass/.
Martin, Randy. Financialization Of Daily Life, Philadelphia: Temple University Press, 2002.
Perlstein, Daniel. ‘Teaching Freedom: SNCC and the Creation of the Mississippi Freedom Schools’, History of Education Quarterly 30.3 (Autumn 1990).
Roberts, Frank Leon. ‘Black Lives Matter: Race, Resistance, and Populist Protest’, 2016, http://www.blacklivesmattersyllabus.com/fall2016/.
Shirky, Clay. ‘Ontology Is Overrated: Categories, Links, and Tags’, 2005, http://shirky.com/writings/herecomeseverybody/ontology_overrated.html.
‘#StandingRockSyllabus’, NYC Stands with Standing Rock, 11 October 2016, https://nycstandswithstandingrock.wordpress.com/standingrocksyllabus/.
‘The Rise (and Fall?) Of the MOOC’, Oxbridge Essays, 14 November 2017, https://www.oxbridgeessays.com/blog/rise-fall-mooc/.
‘TNI Syllabus: Gaming and Feminism’, The New Inquiry, 2 September 2014, https://thenewinquiry.com/tni-syllabus-gaming-and-feminism/.
‘Trump 101’, The Chronicle of Higher Education, 19 June 2016, https://www.chronicle.com/article/Trump-Syllabus/236824/.
‘Wages for Housework: Pamphlets – Flyers – Photographs’, MayDay Rooms, http://maydayrooms.org/archives/wages-for-housework/wfhw-pamphlets-flyers-photographs/.
Williamson, Ben. ‘Number Crunching: Transforming Higher Education into “Performance Data”’, Medium, 16 August 2018, https://medium.com/ussbriefs/number-crunching-transforming-highereducation-into-performance-data-9c23debc4cf7/.



tactics in Kelty, Bodo & Allen 2018


Kelty, Bodo & Allen
Guerrilla Open Access
2018


Guerrilla Open Access

Edited by Memory of the World

Christopher Kelty
Balazs Bodo
Laurie Allen

Published by Post Office Press, Rope Press and Memory of the World. Coventry, 2018.

© Memory of the World, papers by respective Authors.

Freely available at: http://radicaloa.co.uk/conferences/ROA2

This is an open access pamphlet, licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. Read more about the license at: https://creativecommons.org/licenses/by-sa/4.0/

Figures and other media included with this pamphlet may be under different copyright restrictions.

Design by: Mihai Toma, Nick White and Sean Worley

Printed by: Rope Press, Birmingham

This pamphlet is published in a series of 7 as part of the Radical Open Access II – The Ethics of Care conference, which took place June 26-27 at Coventry University. More information about this conference and about the contributors to this pamphlet can be found at: http://radicaloa.co.uk/conferences/ROA2

This pamphlet was made possible due to generous funding from the arts and humanities research studio, The Post Office, a project of Coventry University's Centre for Postdigital Cultures, and due to the combined efforts of authors, editors, designers and printers.

Table of Contents

Guerrilla Open Access:
Terms Of Struggle
Memory of the World
Page 4

Recursive Publics and Open Access
Christopher Kelty
Page 6

Own Nothing
Balazs Bodo
Page 16

What if We Aren't the Only
Guerrillas Out There?
Laurie Allen
Page 26

Guerrilla Open Access: Terms Of Struggle

In the 1990s, the Internet offered a horizon from which to imagine what society
could become, promising autonomy and self-organization next to redistribution of
wealth and collectivized means of production. While the former was in line with the
dominant ideology of freedom, the latter ran contrary to the expanding enclosures
in capitalist globalization. This antagonism has led to epochal copyfights, where free
software and piracy kept the promise of radical commoning alive.
Free software, as Christopher Kelty writes in this pamphlet, provided a model ‘of a
shared, collective, process of making software, hardware and infrastructures that
cannot be appropriated by others’. Well into the 2000s, it served as an inspiration
for global free culture and open access movements who were speculating that
distributed infrastructures of knowledge production could be built, as the Internet
was, on top of free software.
For a moment, the hybrid world of ad-financed Internet giants—sharing code,
advocating open standards and interoperability—and users empowered by these
services convinced almost everyone that a new reading/writing culture was
possible. Not long after the crash of 2008, these disruptors, now wary monopolists,
began to ingest smaller disruptors and close off their platforms. There was still
free software somewhere underneath, but without the ‘original sense of shared,
collective, process’. So, as Kelty suggests, it was hard to imagine that for-profit
academic publishers wouldn't try the same with open access.
Heeding Aaron Swartz’s call to civil disobedience, Guerrilla Open Access has
emerged out of the outrage over digitally-enabled enclosure of knowledge that
has allowed these for-profit academic publishers to appropriate extreme profits
that stand in stark contrast to the cuts, precarity, student debt and asymmetries
of access in education. Shadow libraries stood in for the access denied to public
libraries, drastically reducing global asymmetries in the process.


This radicalization of access has changed how publications
travel across time and space. Digital archiving, cataloging and
sharing are transforming what we once considered private
libraries. Amateur librarianship is becoming public shadow
librarianship. Hybrid use, as poetically unpacked in Balazs
Bodo's reflection on his own personal library, is now entangling
print and digital in novel ways. And, as he warns, the terrain
of antagonism is shifting. While for-profit publishers are
seemingly conceding to Guerrilla Open Access, they are
opening new territories: platforms centralizing data, metrics
and workflows, subsuming academic autonomy into new
processes of value extraction.
The 2010s brought us hope and then the realization of how
little digital networks could help revolutionary movements. The
redistribution toward the wealthy, assisted by digitization, has
eroded institutions of solidarity. The embrace of privilege—
marked by misogyny, racism and xenophobia—that this has
catalyzed is nowhere more evident than in the climate denialism of the
Trump administration. Guerrilla archiving of US government
climate change datasets, as recounted by Laurie Allen,
indicates that more technological innovation simply won't do
away with the 'post-truth' and that our institutions might be in
need of revision, replacement and repair.
As the contributions to this pamphlet indicate, the terms
of struggle have shifted: not only do we have to continue
defending our shadow libraries, but we need to take back the
autonomy of knowledge production and rebuild institutional
grounds of solidarity.

Memory of the World
http://memoryoftheworld.org


Recursive Publics and Open Access

Christopher Kelty

Ten years ago, I published a book called Two Bits: The Cultural Significance of Free
Software (Kelty 2008).¹ Duke University Press and my editor Ken Wissoker were
enthusiastically accommodating of my demands to make the book freely and openly
available. They also played along with my desire to release the 'source code' of the
book (i.e. HTML files of the chapters), and to compare the data on readers of the
open version to print customers. It was a moment of exploration for both scholarly
presses and for me. At the time, few authors were doing this other than Yochai Benkler
(2007) and Cory Doctorow², both activists and advocates for free software and open
access (OA), much as I have been. We all shared, I think, a certain fanaticism of the
convert that came from recognizing free software as an historically new, and radically
different mode of organizing economic and political activity. Two Bits gave me a way
to talk not only about free software, but about OA and the politics of the university
(Kelty et al. 2008; Kelty 2014). Ten years later, I admit to a certain pessimism at the
way things have turned out. The promise of free software has foundered, though not
disappeared, and the question of what it means to achieve the goals of OA has been
swamped by concerns about costs, arcane details of repositories and versioning, and
ritual offerings to the metrics God.
When I wrote Two Bits, it was obvious to me that the collectives who built free
software were essential to the very structure and operation of a standardized
Internet. Today, free software and 'open source' refer to dramatically different
constellations of practice and people. Free software gathers around itself those
committed to the original sense of a shared, collective, process of making software,
hardware and infrastructures that cannot be appropriated by others. In political
terms, I have always identified free software with a very specific, updated, version
of classical Millian liberalism. It sustains a belief in the capacity for collective action
and rational thought as aids to establishing a flourishing human livelihood. Yet it
also preserves an outdated blind faith in the automatic functioning of meritorious
speech, that the best ideas will inevitably rise to the top. It is an updated classical
liberalism that saw in software and networks a new place to resist the tyranny of the
conventional and the taken for granted.


By contrast, open source has come to mean something quite different: an ecosystem
controlled by an oligopoly of firms which maintains a shared pool of components and
frameworks that lower the costs of education, training, and software creation in the
service of establishing winner-take-all platforms. These are built on open source, but
they do not carry the principles of freedom or openness all the way through to the
platforms themselves.³ What open source has become is now almost the opposite of
free software—it is authoritarian, plutocratic, and nepotistic, everything liberalism
wanted to resist. For example, precarious labor and platforms such as Uber or Task
Rabbit are built upon and rely on the fruits of the labor of 'open source', but the
platforms that result do not follow the same principles—they are not open or free
in any meaningful sense—to say nothing of the Uber drivers or task rabbits who live
by the platforms.
Does OA face the same problem? In part, my desire to 'free the source' of my book
grew out of the unfinished business of digitizing the scholarly record. It is an irony
that much of the work that went into designing the Internet at its outset in the
1980s, such as gopher, WAIS, and the HTML of CERN, was conducted in the name
of the digital transformation of the library. But by 2007, these aims were swamped
by attempts to transform the Internet into a giant factory of data extraction. Even
in 2006-7 it was clear that this unfinished business of digitizing the scholarly record
was going to become a problem—both because it was being overshadowed by other
concerns, and because of the danger it would eventually be subjected to the very
platformization underway in other realms.
Because if the platform capitalism of today has ended up being parasitic on the
free software that enabled it, then why would this not also be true of scholarship
more generally? Are we not witnessing a transition to a world where scholarship
is directed—in its very content and organization—towards the profitability of the
platforms that ostensibly serve it?⁴ Is it not possible that the platforms created to
'serve science'—Elsevier's increasing acquisition of tools to control the entire lifecycle of research, or ResearchGate's ambition to become the single source for all
academics to network and share research—that these platforms might actually end up
warping the very content of scholarly production in the service of their profitability?
To put this even more clearly: OA has come to exist and scholarship is more available
and more widely distributed than ever before. But scholars now have less control,
and have taken less responsibility for the means of production of scientific research,
its circulation, and perhaps even the content of that science.


The Method of Modulation
When I wrote Two Bits I organized the argument around the idea of modulation:
free software is simply one assemblage of technologies, practices, and people
aimed at resolving certain problems regarding the relationship between knowledge
(or software tools related to knowledge) and power (Hacking 2004; Rabinow
2003). Free software as such was and still is changing as each of its elements
evolve or are recombined. Because OA derives some of its practices directly from
free software, it is possible to observe how these different elements have been
worked over in the recent past, as well as how new and surprising elements are
combined with OA to transform it. Looking back on the elements I identified as
central to free software, one can ask: how is OA different, and what new elements
are modulating it into something possibly unrecognizable?

Sharing source code
Shareable source code was a concrete and necessary achievement for free
software to be possible. Similarly, the necessary ability to circulate digital texts
is a significant achievement—but such texts are shareable in a much different way.
For source code, computable streams of text are everything—anything else is a
'blob' like an image, a video or any binary file. But scholarly texts are blobs: Word or
Portable Document Format (PDF) files. What's more, while software programmers
may love 'source code', academics generally hate it—anything less than the final,
typeset version is considered unfinished (see e.g. the endless disputes over
'author's final versions' plaguing OA).⁵ Finality is important. Modifiability of a text,
especially in the humanities and social sciences, is acceptable only when it is an
experiment of some kind.
In a sense, the source code of science is not a code at all, but a more abstract set
of relations between concepts, theories, tools, methods, and the disciplines and
networks of people who operate with them, critique them, extend them and try to
maintain control over them even as they are shared within these communities.

Defining openness
For free software to make sense as a solution, those involved first had to
characterize the problem it solved—and they did so by identifying a pathology in
the worlds of corporate capitalism and engineering in the 1980s: that computer
corporations were closed organizations who re-invented basic tools and
infrastructures in a race to dominate a market. An 'open system,' by contrast, would
avoid the waste of 'reinventing the wheel' and of pathological competition, allowing
instead modular, reusable parts that could be modified and recombined to build
better things in an upward spiral of innovation. The 1980s ideas of modularity,
modifiability, abstraction barriers, interchangeable units have been essential to the
creation of digital infrastructures.
To propose an 'open science' thus modulates this definition—and the idea works in
some sciences better than others. Aside from the obviously different commercial
contexts, philosophers and literary theorists just don't think about openness this
way—theories and arguments may be used as building blocks, but they are not
modular in quite the same way. Only the free circulation of the work, whether for
recombination or for reference and critique, remains a sine qua non of the theory
of openness proposed there. It is opposed to a system where it is explicit that only
certain people have access to the texts (whether that be through limitations of
secrecy, or limitations on intellectual property, or an implicit elitism).

Writing and using copyright licenses
Of all the components of free software that I analyzed, this is the one practice
that remains the least transformed—OA texts use the same CC licenses pioneered
in 2001, which were a direct descendant of free software licenses.
A novel modulation of these licenses is the OA policies (the embrace of OA in
Brazil for instance, or the spread of OA Policies starting with Harvard and the
University of California, and extending to the EU Mandate from 2008 forward).
Today the ability to control the circulation of a text with IP rights is far less
economically central to the strategies of publishers than it was in 2007, even if
they persist in attempting to do so. At the same time, funders, states, and
universities have all adopted patchwork policies intended to both sustain green
OA, and push publishers to innovate their own business models in gold and hybrid
OA. While green OA is a significant success on paper, the actual use of it to
circulate work pales in comparison to the commercial control of circulation on the
one hand, and the increasing success of shadow libraries on the other. Repositories
have sprung up in every shape and form, but they remain largely ad hoc, poorly
coordinated, and underfunded solutions to the problem of OA.

Coordinating collaborations
The collective activity of free software is ultimately the
most significant of its achievements—marrying a form of
intensive small-scale interaction amongst programmers,
with sophisticated software for managing complex objects
(version control and GitHub-like sites). There has been
constant innovation in these tools for controlling, measuring,
testing, and maintaining software.
By contrast, the collective activity of scholarship is still
largely a pre-modern affair. It is coordinated largely by the
idea of 'writing an article together' and not by working
to maintain some larger map of what a research topic,
community, or discipline has explored—what has worked and
what has not.
This focus on the coordination of collaboration seemed to
me to be one of the key advantages of free software, but it
has turned out to be almost totally absent from the practice
or discussion of OA. Collaboration and the recombination of
elements of scholarly practice obviously happen, but they do
not depend on OA in any systematic way: there is only the
counterfactual that without it, many different kinds of people
are excluded from collaboration, or even simple participation
in scholarship, something that most active scholars are
willfully ignorant of.

Fomenting a movement
I demoted the idea of a social movement to merely one
component of the success of free software, rather than let
it be—as most social scientists would have it—the principal
container for free software. They are not the whole story.


Is there an OA movement? Yes and no. Librarians remain
the most activist and organized. The handful of academics
who care about it have shifted to caring about it in primarily
a bureaucratic sense, forsaking the cross-organizational
aspects of a movement in favor of activism within universities
(to which I plead guilty). But this transformation forsakes
the need for addressing the collective, collaborative
responsibility for scholarship in favor of letting individual
academics, departments, and disciplines be the focus for
such debates.
By contrast, the publishing industry works with a
phantasmatic idea of both an OA 'movement' and of the actual
practices of scholarship—they too defer, in speech if not in
practice, to the academics themselves, but at the same time
must create tools, innovate processes, establish procedures,
acquire tools and companies and so on in an effort to capture
these phantasms and to prevent academics from collectively
doing so on their own.
And what new components? The five above were central to
free software, but OA has other components that are arguably
more important to its organization and transformation.

Money, i.e. library budgets
Central to almost all of the politics and debates about OA
is the political economy of publication. From the 'bundles'
debates of the 1990s to the gold/green debates of the 2010s,
the sole source of money for publication long ago shifted into
the library budget. The relationship that library budgets
have to other parts of the political economy of research
(funding for research itself, debates about tenured/non-tenured, adjunct and other temporary salary structures) has
shifted as a result of the demand for OA, leading libraries
to re-conceptualize themselves as potential publishers, and
publishers to re-conceptualize themselves as serving 'life
cycles' or 'pipeline' of research, not just its dissemination.


Metrics
More than anything, OA is promoted as a way to continue
to feed the metrics God. OA means more citations, more
easily computable data, and more visible uses and re-uses of
publications (as well as 'open data' itself, when conceived of
as product and not measure). The innovations in the world
of metrics—from the quiet expansion of the platforms of the
publishers, to the invention of 'alt metrics', to the enthusiasm
of 'open science' for metrics-driven scientific methods—form
a core feature of what 'OA' is today, in a way that was not true
of free software before it, where metrics concerning users,
downloads, commits, or lines of code were always after-the-fact measures of quality, and not constitutive ones.
Other components of this sort might be proposed, but the
main point is to resist clutching OA as if it were the beating
heart of a social transformation in science, as if it were a
thing that must exist, rather than a configuration of elements
at a moment in time. OA was a solution—but it is too easy to
lose sight of the problem.
Open Access without Recursive Publics
When we no longer have any commons, but only platforms,
will we still have knowledge as we know it? This is a question
at the heart of research in the philosophy and sociology
of knowledge—not just a concern for activism or social
movements. If knowledge is socially produced and maintained,
then the nature of the social bond surely matters to the
nature of that knowledge. This is not so different from asking
whether we will still have labor or work, as we have long known
it, in an age of precarity. What is the knowledge equivalent of
precarity (i.e. not just the existence of precarious knowledge
workers, but a kind of precarious knowledge as such)?

Do we not already see the evidence of this in the 'post-truth' of fake news, or the
deliberate refusal by those in power to countenance evidence, truth, or established
systems of argument and debate? The relationship between knowledge and power
is shifting dramatically, because the costs—and the stakes—of producing high
quality, authoritative knowledge have also shifted. It is not so powerful any longer;
science does not speak truth to power because truth is no longer so obviously
important to power.
Although this is a pessimistic portrait, it may also be a sign of something yet to
come. Free software as a community has been, and sometimes still is, critiqued as
being an exclusionary space of white male sociality (Nafus 2012; Massanari 2016;
Ford and Wajcman 2017; Reagle 2013). I think this critique is true, but it is less a
problem of identity than it is a pathology of a certain form of liberalism: a form that
demands that merit consists only in the content of the things we say (whether in
a political argument, a scientific paper, or a piece of code), and not in the ways we
say them, or who is encouraged to say them and who is encouraged to remain silent
(Dunbar-Hester 2014).
One might, as a result, choose to throw out liberalism altogether as a broken
philosophy of governance and liberation. But it might also be an opportunity to
focus much more specifically on a particular problem of liberalism, one that the
discourse of OA also relies on to a large extent. Perhaps merit derives not solely
from the content of utterances freely and openly circulated, but also from the
ways in which they are uttered, and the dignity of the people
who utter them. An OA (or a free software) that embraced that principle would
demand that we pay attention to different problems: how are our platforms,
infrastructures, tools organized and built to support not just the circulation of
putatively true statements, but the ability to say them in situated and particular
ways, with respect for the dignity of who is saying them, and with the freedom to
explore the limits of that kind of liberalism, should we be so lucky to achieve it.


References

Benkler, Yochai. 2007. The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press.
Dunbar-Hester, Christina. 2014. Low Power to the People: Pirates, Protest, and Politics in FM Radio Activism. MIT Press.
Ford, Heather, and Judy Wajcman. 2017. “‘Anyone Can Edit’, Not Everyone Does: Wikipedia’s Infrastructure and the Gender Gap”. Social Studies of Science 47 (4): 511–527. doi:10.1177/0306312717692172.
Hacking, I. 2004. Historical Ontology. Harvard University Press.
Kelty, Christopher M. 2014. “Beyond Copyright and Technology: What Open Access Can Tell Us About Precarity, Authority, Innovation, and Automation in the University Today”. Cultural Anthropology 29 (2): 203–215. doi:10.14506/ca29.2.02.
———. 2008. Two Bits: The Cultural Significance of Free Software. Durham, N.C.: Duke University Press.
Kelty, Christopher M., et al. 2008. “Anthropology In/of Circulation: a Discussion”. Cultural Anthropology 23 (3).
Massanari, Adrienne. 2016. “#gamergate and the Fappening: How Reddit’s Algorithm, Governance, and Culture Support Toxic Technocultures”. New Media & Society 19 (3): 329–346. doi:10.1177/1461444815608807.
Nafus, Dawn. 2012. “‘Patches don’t have gender’: What is not open in open source software”. New Media & Society 14 (4): 669–683. doi:10.1177/1461444811422887.
Rabinow, Paul. 2003. Anthropos Today: Reflections on Modern Equipment. Princeton University Press.
Reagle, Joseph. 2013. “‘Free As in Sexist?’ Free Culture and the Gender Gap”. First Monday 18 (1). doi:10.5210/fm.v18i1.4291.

Notes

¹ https://twobits.net/download/index.html
² https://craphound.com/
³ For example, Platform Cooperativism: https://platform.coop/directory
⁴ See for example the figure from ‘Rent Seeking by Elsevier’ by Alejandro Posada and George Chen: http://knowledgegap.org/index.php/sub-projects/rent-seeking-and-financialization-of-the-academic-publishing-industry-preliminary-findings/
⁵ See Sherpa/Romeo: http://www.sherpa.ac.uk/romeo/index.php

Own Nothing

Balazs Bodo

Flow My Tears
My tears cut deep grooves into the dust on my face. Drip, drip, drop, they hit the
floor and disappear among the torn pages scattered on the floor.
This year it dawned on us that we cannot postpone it any longer: our personal
library has to go. Our family moved countries more than half a decade ago, we
switched cultures, languages, and chose another future. But the past, in the form
of a few thousand books in our personal library, was still neatly stacked in our old
apartment, patiently waiting, books that we bought and enjoyed — and forgot;
books that we bought and never opened; books that we inherited from long-dead
parents and half-forgotten friends. Some of them were important. Others were
relevant at one point but no longer, yet they still reminded us who we once were.
When we moved, we took no more than two suitcases of personal belongings. The
books were left behind. The library was like a sick child or an ailing parent, it hung
over our heads like an unspoken threat, a curse. It was clear that sooner or later
something had to be done about it, but none of the options available offered any
consolation. It made no sense to move three thousand books to the other side of
this continent. We decided to emigrate, and not to take our past with us, to abandon
the contexts we were fleeing from. We made a choice to leave behind the history,
the discourses, the problems and the pain that accumulated in the books of our
library. I knew exactly what it was I didn’t want to teach to my children once we
moved. So we did not move the books. We pretended that we would never have to
think about what this decision really meant. Up until today. This year we needed
to empty the study with the shelves. So I’m standing in our library now, the dust
covering my face, my hands, my clothes. In the middle of the floor there are three
big crates and one small box. The small box swallows what we’ll ultimately take with
us, the books I want to show to my son when he gets older, in case he still wants to
read. One of the big crates will be taken away by the antiquarian. The other will be
given to the school library next door. The third is the wastebasket, where everything
else will ultimately go.

Drip, drip, drip, my tears flow as I throw the books into this
last crate, drip, drip, drop. Sometimes I look at my partner,
working next to me, and I can see on her face that she is going
through the same emotions. I sometimes catch the sight of
her trembling hand, hesitating for a split second where a book
should ultimately go, whether we could, whether we should
save that particular one, because… But we either save them all
or we are as ruthless as all those millions of people throughout
history, who had an hour to pack their two suitcases before they
needed to leave. Do we truly need this book? Is this a book we’ll
want to read? Is this book an inseparable part of our identity?
Did we miss this book at all in the last five years? Is this a text
I want to preserve for the future, for potential grandchildren
who may not speak my mother tongue at all? What is the function
of the book? What is the function of this particular book in my
life? Why am I hesitating throwing it out? Why should I hesitate
at all? Drop, drop, drop, a decision has been made. Drop, drop,
drop, books are falling to the bottom of the crates.
We are killers, gutting our library. We are like the half-drowned
sailor who got entangled in the ropes and went down with the
ship, and who now frantically tries to cut himself free from the
detritus that prevents him from reaching the freedom of the surface,
the sunlight and the air.


Own Nothing, Have Everything
Do you remember Napster’s slogan after it went legit, trying to transform itself into
a legal music service around 2005? ‘Own nothing, have everything’ – that was the
headline that was supposed to sell legal streaming music. How stupid, I thought. How
could you possibly think that lack of ownership would be a good selling point? What
does it even mean to ‘have everything’ without ownership? And why on earth would
not everyone want to own the most important constituents of their own self, their
own identity? The things I read, the things I sing, make me who I am. Why wouldn’t I
want to own these things?
How revolutionary this idea had been, I reflected, as I watched the local homeless folks
filling up their sacks with the remains of my library. How happy I would be if I could
have all this stuff I had just thrown away without actually having to own any of it. The
proliferation of digital texts led me to believe that we won’t be needing dead wood
libraries at all, at least no more than we need vinyl to listen to, or collect music. There
might be geeks, collectors, specialists, who for one reason or another still prefer the
physical form to the digital, but for the rest of us convenience, price, searchability, and
all the other digital goodies give enough reason not to collect stuff that collects dust.
I was wrong to think that. I now realize that the future is not fully digital, it is more
a physical-digital hybrid, in which the printed book is not simply an endangered
species protected by a few devoted eccentrics who refuse to embrace the obvious
advantages of a fully digital book future. What I see now is the emergence of a strange
and shapeshifting hybrid of diverse physical and electronic objects and practices,
where the relative strengths and weaknesses of these different formats nicely
complement each other.
This dawned on me after we had moved into an apartment without a bookshelf. I grew
up in a flat that housed my parents’ extensive book collection. I knew the books by their
cover and from time to time something made me want to take one from the shelf, open
it and read it. This is how I discovered many of my favorite books and writers. With
the e-reader, and some of the best shadow libraries at hand, I felt the same at first. I
felt liberated. I could experiment without cost or risk, I could start—or stop—a book,
I didn’t have to consider the cost of buying and storing a book that was ultimately
not meant for me. I could enjoy the books without having to carry the burden and
responsibility of ownership.
Did you notice how deleting an epub file gives you a different feeling than throwing
out a book? You don’t have to feel guilty, you don’t have to feel anything at all.
So I was reading, reading, reading like never before. But at that time my son was too
young to read, so I didn’t have to think about him, or anyone else besides myself. But
as he was growing, it slowly dawned on me: without these physical books how will I be
able to give him the same chance of serendipity, and of discovery, enchantment, and
immersion that I got in my father’s library? And even later, what will I give him as his
heritage? Son, look into this folder of PDFs: this is my legacy, your heritage, explore,
enjoy, take pride in it?
Collections of anything, whether they are art, books, objects, people, are inseparable
from the person who assembled that collection, and when that person is gone, the
collection dies, as does the most important inroad to it: the will that created this
particular order of things has passed away. But the heavy and unavoidable physicality
of a book collection forces all those left behind to make an effort to approach, to
force their way into, and try to navigate that garden of forking paths that is someone
else’s library. Even if you ultimately get rid of everything, you have to introduce
yourself to every book, and let every book introduce itself to you, so you know what
you’re throwing out. Even if you’ll ultimately kill, you will need to look into the eyes of
all your victims.
With a digital collection that’s, of course, not the case.
The e-book is ephemeral. It has little past and even less chance to preserve the
fingerprints of its owners over time. It is impersonal, efficient, fast, abundant, like
fast food or plastic, it flows through the hand like sand. It lacks the embodiment, the
materiality which would give it a life in a temporal dimension. If you want to network the
dead and the unborn, as is the ambition of every book, then you need to print and bind,
and create heavy objects that are expensive, inefficient and a burden. This burden
residing in the object is the bridge that creates the intergenerational dimension,
that forces you to think of the value of a book.
Own nothing, have nothing. Own everything, and your children will hate you when
you die.
I have to say, I’m struggling to find a new balance here. I started to buy books again,
usually books that I’d already read from a stolen copy on-screen. I know what I want
to buy, I know what is worth preserving. I know what I want to show to my son, what
I want to pass on, what I would like to take care of over time. Before, book buying for
me was an investment into a stranger. Now that thrill is gone forever. I measure up
the merchandise well beforehand, I build an intimate relationship, we make love again
and again, before moving in together.
It is certainly a new kind of relationship with the books I bought since I got my e-reader.
I still have to come to terms with the fact that the books I bought this way are rarely
opened, as I already know them, and their role is not to be read, but to be together.
What do I buy, and what do I get? Temporal, existential security? The chance of
serendipity, if not for me, then for the people around me? The reassuring materiality
of the intimacy I built with these texts through another medium?
All of these and maybe more. But in any case, I sense that this library, the physical
embodiment of a physical-electronic hybrid collection with its unopened books and
overflowing e-reader memory cards, is very different from the library I had, and the
library I’m getting rid of at this very moment. The library that I inherited, the library
that grew organically from the detritus of the everyday, the library that accumulated
books similar to how the books accumulated dust, as is the natural way of things, this
library was full of unknowns, it was a library of potentiality, of opportunities, of trips
waiting to happen. This new, hybrid library is a collection of things that I’m familiar with.
I intimately know every piece, they hold little surprise, they offer few discoveries — at
least for me. The exploration, the discovery, the serendipity, the pre-screening takes
place on the e-reader, among the ephemeral, disposable PDFs and epubs.

We Won
This new hybrid model is based on the cheap availability of digital books. In my case, the
free availability of pirated copies available through shadow libraries. These libraries
don’t have everything on offer, but they have books in an order of magnitude larger
than I’ll ever have the time and chance to read, so they offer enough, enough for me
to fill up hard drives with books I want to read, or at least skim, to try, to taste. As if I
moved into an infinite bookstore or library, where I can be as promiscuous, explorative,
nomadic as I always wanted to be. I can flirt with books, I can have a quickie, or I can
leave them behind without shedding a single tear.
I don’t know how this hybrid library, and this analogue-digital hybrid practice of reading
and collecting would work without the shadow libraries which make everything freely
accessible. I rely on their supply to test texts, and feed and grow my print library.
E-books are cheaper than their print versions, but they still cost money, carry a
risk, a cost of experimentation. Book-streaming, the flat-rate, the all-you-can-eat
format of accessing books is at the moment available mostly for audiobooks, and rarely
for e-books. I wonder why.
Did you notice that there are no major book piracy lawsuits?

Have everything, and own a few.


Of course there is the lawsuit against Sci-Hub and Library Genesis in New York, and
there is another one in Canada against aaaaarg, causing major nuisance to those who
have been named in these cases. But this is almost negligible compared to the high
profile wars the music and audiovisual industries waged against Napster, Grokster,
Kazaa, megaupload and their likes. It is as if book publishers have completely given up on
trying to fight piracy in the courts, and have launched a few lawsuits only to maintain
the appearance that they still care about their digital copyrights. I wonder why.
I know the academic publishing industry slightly better than the mainstream popular
fiction market, and I have the feeling that in the former copyright-based business
models are slowly being replaced by something else. We see no major anti-piracy
efforts from publishers, not because piracy is non-existent — on the contrary, it is
global, and it is big — but because the publishers most probably realized that in the
long run the copyright-based exclusivity model is unsustainable. The copyright wars
of the last two decades taught them that law cannot put an end to piracy. As the
Sci-Hub case demonstrates, you can win all you want in a New York court, but this
has little real-world effect as long as the conditions that attract the users to the
shadow libraries remain.
Exclusivity-based publishing business models are under assault from other sides as
well. Mandated open access in the US and in the EU means that there is a quickly
growing body of new research for access to which publishers cannot charge
money anymore. LibGen and Sci-Hub make it harder to charge for the back catalogue.
Their sheer existence teaches millions what uncurtailed open access really is, and
makes it easier for university libraries to negotiate with publishers, as they don’t have
to worry about their patrons being left without any access at all.
The good news is that radical open access may well be happening. It is a less and less
radical idea to have things freely accessible. One has to be less and less radical to
achieve the openness that has been long overdue. Maybe it is not yet obvious today
and the victory is not yet universal, maybe it’ll take some extra years, maybe it won’t
ever be evenly distributed, but it is obvious that this genie, these millions of books on
everything from malaria treatments to critical theory, cannot be erased, and open
access will not be undone, and the future will be free of access barriers. Who is
downloading books and articles? Everyone. Radical open access? We won, if you like.

We Are Not Winning at All
But did we really win? If publishers are happy to let go of access control and copyright,
it means that they’ve found something that is even more profitable than selling
back to us academics the content that we have produced. And this more profitable
something is of course data. Did you notice where all the investment in academic
publishing went in the last decade? Did you notice SSRN, Mendeley, Academia.edu,
ScienceDirect, research platforms, citation software, manuscript repositories, library
systems being bought up by the academic publishing industry? All these platforms
and technologies operate on and support open access content, while they generate
data on the creation, distribution, and use of knowledge; on individuals, researchers,
students, and faculty; on institutions, departments, and programs. They produce data
on the performance, on the success and the failure of the whole domain of research
and education. This is the data that is being privatized, enclosed, packaged, and sold
back to us.

Drip, drip, drop, it’s only nostalgia. My heart is light, as I don’t have to worry about
gutting the library. Soon it won’t matter at all.

Taylorism reached academia. In the name of efficiency, austerity, and transparency,
our daily activities are measured, profiled, packaged, and sold to the highest bidder.
But in this process of quantification, knowledge about ourselves is lost to us, unless we
pay. We still have some patchy datasets on what we do and on who we are, we still have
this blurred reflection in the data-mirrors that we still do control. But this path of
self-enlightenment is quickly waning as fewer and fewer data sources about us are freely
available to us.


I strongly believe that information on the self is the foundation
of self-determination. We need to have data on how we operate,
on what we do in order to know who we are. This is what is being
privatized away from the academic community, this is being
taken away from us.
Radical open access. Not of content, but of the data about
ourselves. This is the next challenge. We will digitize every page,
by hand if we must, that process cannot be stopped anymore.
No outside power can stop it and take that from us. Drip, drip,
drop, this is what I console myself with, as another handful of
books land among the waste.
But the data we lose now will not be so easy to reclaim.


What if We Aren't the Only Guerrillas Out There?

Laurie Allen

My goal in this paper is to tell the story
of a grass-roots project called Data
Refuge (http://www.datarefuge.org)
that I helped to co-found shortly after,
and in response to, the Trump election
in the USA. Trump’s reputation as
anti-science, and the promise that his
administration would elevate into
positions of power people with a track
record of distorting, hiding, or obscuring
the scientific evidence of climate change,
caused widespread concern that
valuable federal data was now in danger.
The Data Refuge project grew from the
work of Professor Bethany Wiggin and
the graduate students within the Penn
Program in Environmental Humanities
(PPEH), notably Patricia Kim, and was
formed in collaboration with the Penn
Libraries, where I work. In this paper, I
will discuss the Data Refuge project, and
call attention to a few of the challenges
inherent in the effort, especially as
they overlap with the goals of this
collective. I am not a scholar. Instead,
I am a librarian, and my perspective as
a practicing information professional
informs the way I approach this paper,
which weaves together the practical
and technical work of ‘saving data’ with
the theoretical, systemic, and ethical
issues that frame and inform what we
have done.

I work as the head of a relatively small and new department within the libraries
of the University of Pennsylvania, in the city of Philadelphia, Pennsylvania, in the
US. I was hired to lead the Digital Scholarship department in the spring of 2016,
and most of the seven (soon to be eight) people within Digital Scholarship have joined
the library since then in newly created positions. Our group includes a mapping
and spatial data librarian and three people focused explicitly on supporting the
creation of new Digital Humanities scholarship. There are also two people in the
department who provide services connected with digital scholarly open access
publishing, including the maintenance of the Penn Libraries’ repository of open
access scholarship, and one Data Curation and Management Librarian. This
Data Librarian, Margaret Janz, started working with us in September 2016, and
features heavily in the story I’m about to tell about our work helping to build Data
Refuge. While Margaret and I were the main people in our department involved in
the project, it is useful to understand the work we did as connected more broadly
to the intersection of activities—from multimodal, digital, humanities creation to
open access publishing across disciplines—represented in our department at Penn.
At the start of Data Refuge, Professor Wiggin and her students had already been
exploring the ways that data about the environment can empower communities
through their art, activism, and research, especially along the lower Schuylkill
River in Philadelphia. They were especially attuned to the ways that missing data,
or data that is not collected or communicated, can be a source of disempowerment.
After the Trump election, PPEH graduate students raised the concern that the
political commitments of the new administration would result in the disappearance
of environmental and climate data that is vital to work in cities and communities
around the world. When they raised this concern with the library, together we co-founded Data Refuge. It is worth pointing out that, while the Penn Libraries is a
large and relatively well-resourced research library in the United States, it did not
have any automatic way to ingest and steward the data that Professor Wiggin and
her students were concerned about. Our system of acquiring, storing, describing
and sharing publications did not account for, and could not easily handle, the
evident need to take in large quantities of public data from the open web and make
them available and citable by future scholars. Indeed, no large research library
was positioned to respond to this problem in a systematic way, though there was
general agreement that the community would like to help.
The collaborative, grass-roots movement that formed Data Refuge included many
librarians, archivists, and information professionals, but it was clear from the
beginning that my own profession did not have in place a system for stewarding
these vital information resources, or for treating them as ‘publications’ of the


federal government. This fact was widely understood by various members of our
profession, notably by government document librarians, who had been calling
attention to this lack of infrastructure for years. As Government Information
Librarian Shari Laster described in a blog post in November of 2016, government
documents librarians have often felt like they are ‘under siege’ not from political
forces, but from the inattention to government documents afforded by our systems
and infrastructure. Describing the challenges facing the profession in light of the
2016 election, she commented: “Government documents collections in print are
being discarded, while few institutions are putting strategies in place for collecting
government information in digital formats. These strategies are not expanding in
tandem with the explosive proliferation of these sources, and certainly not in pace
with the changing demands for access from public users, researchers, students,
and more.” (Laster 2016). Beyond government documents librarians, our project
joined efforts that were ongoing in a huge range of communities, including: open
data and open science activists; archival experts working on methods of preserving
born-digital content; cultural historians; federal data producers and the archivists
and data scientists they work with; and, of course, scientists.

Born from the collaboration between Environmental Humanists and Librarians,
Data Refuge was always an effort both at storytelling and at storing data. During
the first six months of 2017, volunteers across the US (and elsewhere) organized
more than 50 Data Rescue events, with participants numbering in the thousands.
At each event, a group of volunteers used tools created by our collaborators at
the Environmental and Data Governance Initiative (EDGI) (https://envirodatagov.org/)
to support the End of Term Harvest (http://eotarchive.cdlib.org/) project by
identifying seeds from federal websites for web archiving in the Internet Archive.
Simultaneously, more technically advanced volunteers wrote scripts to pull data
out of complex data systems, and packaged that data for longer term storage in a
repository we maintained at datarefuge.org. Still other volunteers held teach-ins,
built profiles of data storytellers, and otherwise engaged in safeguarding
environmental and climate data through community action (see
http://www.ppehlab.org/datarefugepaths). The repository at datarefuge.org that
houses the more difficult data sources has been stewarded by myself and Margaret
Janz through our work at Penn Libraries, but it exists outside the library’s main
technical infrastructure.¹
This distributed approach to the work of downloading and saving the data
encouraged people to see how they were invested in environmental and scientific
data, and to consider how our government records should be considered the
property of all of us. Attending Data Rescue events was a way for people who value
the scientific record to fight back, in a concrete way, against an anti-fact
establishment. By downloading data and moving it into the Internet Archive and
the Data Refuge repository, volunteers were actively claiming the importance of
accurate records in maintaining or creating a just society.
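As a concrete illustration of the harvesting pattern described above (download a copy of a public dataset, record its provenance, and fingerprint it so that later copies can be checked against this one), here is a minimal sketch in Python. It is not taken from the Data Refuge or EDGI toolchain; the URL, directory name and manifest format below are hypothetical, and the real workflow involved far more careful packaging and review.

# A minimal sketch, not part of the Data Refuge or EDGI toolchain: it only
# illustrates the pattern described above (download a copy of a public dataset,
# record provenance, and store a SHA-256 checksum so later copies can be verified).
# The URL and directory names below are hypothetical.
import hashlib
import json
import urllib.request
from datetime import datetime, timezone
from pathlib import Path
from urllib.parse import urlparse

def harvest(url: str, out_dir: str) -> Path:
    """Download one dataset file and write a checksum manifest next to it."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    filename = Path(urlparse(url).path).name or "dataset"
    target = out / filename

    # Stream the file to disk while computing its SHA-256 digest.
    digest = hashlib.sha256()
    with urllib.request.urlopen(url) as response, open(target, "wb") as fh:
        while True:
            chunk = response.read(64 * 1024)
            if not chunk:
                break
            digest.update(chunk)
            fh.write(chunk)

    manifest = {
        "source_url": url,
        "retrieved_at": datetime.now(timezone.utc).isoformat(),
        "filename": filename,
        "size_bytes": target.stat().st_size,
        "sha256": digest.hexdigest(),
    }
    manifest_path = out / (filename + ".manifest.json")
    manifest_path.write_text(json.dumps(manifest, indent=2))
    return manifest_path

if __name__ == "__main__":
    # Hypothetical example; any publicly downloadable dataset URL would do.
    harvest("https://example.gov/data/air_quality_2016.csv", "rescued_data")

The point of the manifest is the one made throughout this essay: a copy is only as useful as the claim to authenticity that can later be made for it.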

Of course, access to data need not rely on its inclusion in
a particular repository. As is demonstrated so well in other
contexts, technological methods of sharing files can make
the digital repositories of libraries and archives seem like a
redundant holdover from the past. However, as I will argue
further in this paper, the data that was at risk in Data Refuge
differed in important ways from the contents of what Bodó
refers to as ‘shadow libraries’ (Bodó 2015). For opening
access to copies of journals articles, shadow libraries work
perfectly. However, the value of these shadow libraries relies
on the existence of the widely agreed upon trusted versions.
If in doubt about whether a copy is trustworthy, scholars
can turn to more mainstream copies, if necessary. This was
not the situation we faced building Data Refuge. Instead, we
were often dealing with the sole public, authoritative copy
of a federal dataset and had to assume that, if it were taken
down, there would be no way to check the authenticity of
other copies. The data was not easily pulled out of systems
as the data and the software that contained them were often
inextricably linked. We were dealing with unique, tremendously
valuable, but often difficult-to-untangle datasets rather than
neatly packaged publications. The workflow we established
was designed to privilege authenticity and trustworthiness
over either the speed of the copying or the easy usability of
the resulting data.² This extra care around authenticity was
necessary because of the politicized nature of environmental
data that made many people so worried about its removal
after the election. It was important that our project
supported the strongest possible scientific arguments that
could be made with the data we were ‘saving’. That meant
that our copies of the data needed to be citable in scientific
scholarly papers, and that those citations needed to be
able to withstand hostile political forces who claim that the
science of human-caused climate change is 'uncertain'. It
was easy to imagine in the Autumn of 2016, and even easier
to imagine now, that hostile actors might wish to muddy the
science of climate change by releasing fake data designed
to cast doubt on the science of climate change. For that
reason, I believe that the unique facts we were seeking
to safeguard in the Data Refuge bear less similarity to the
contents of shadow libraries than they do to news reports
in our current distributed and destabilized mass media
environment. Referring to the ease of publishing ideas on the
open web, Zeynep Tufecki wrote in a recent column, “And
sure, it is a golden age of free speech—if you can believe your
lying eyes. Is that footage you’re watching real? Was it really
filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even
generated with the help of artificial intelligence? (Yes, there
are systems that can create increasingly convincing fake
videos.)” (Tufekci 2018). This was the state we were trying to
avoid when it comes to scientific data, fearing that we might
have the only copy of a given dataset without solid proof that
our copy matched the original.
If US federal websites cease functioning as reliable stewards
of trustworthy scientific data, reproducing their data
without a new model of quality control risks producing the
very censorship that our efforts are supposed to avoid,
and further undermining faith in science. Said another way,
if volunteers duplicated federal data all over the Internet
without a trusted system for ensuring the authenticity of
that data, then as soon as the originals were removed, a sea of
fake copies could easily render the original invisible, and they
would be just as effectively censored. “The most effective
forms of censorship today involve meddling with trust and
attention, not muzzling speech itself.” (Tufekci 2018).
These concerns about the risks of open access to data should
not be understood as capitulation to the current market-driven approach to scholarly publishing, nor as a call for
continuation of the status quo. Instead, I hope to encourage
continuation of the creative approaches to scholarship
represented in this collective. I also hope the issues raised in


Data Refuge will serve as a call to take greater responsibility for the systems into
which scholarship flows and the structures of power and assumptions of trust (by
whom, of whom) that scholarship relies on.
While plenty of participants in the Data Refuge community posited scalable
technological approaches to help people trust data, none emerged that were
strong enough to risk further undermining faith in science that a malicious attack
might cause. Instead of focusing on technical solutions that rely on the existing
systems staying roughly as they are, I would like to focus on developing networks
that explore different models of trust in institutions, and that honor the values
of marginalized and indigenous people. For example, in a recent paper, Stacie
Williams and Jarrett Drake describe the detailed decisions they made to establish
and become deserving of trust in supporting the creation of an Archive of Police
Violence in Cleveland (Williams and Drake 2017). The work of Michelle Caswell and
her collaborators on exploring post-custodial archives, and on engaging in radical
empathy in the archives provides great models of the kind of work that I believe is
necessary to establish new models of trust that might help inform new modes of
sharing and relying on community information (Caswell and Cifor 2016).
Beyond seeking new ways to build trust, it has become clear that new methods
are needed to help filter and contextualize publications. Our current reliance
on a few for-profit companies to filter and rank what we see of the information
landscape has proved to be tremendously harmful for the dissemination of facts,
and has been especially dangerous to marginalized communities (Noble 2018).
While the world of scholarly humanities publishing is doing somewhat better than
open data or mass media, there is still a risk that without new forms of filtering and
establishing quality and trustworthiness, good ideas and important scholarship will
be lost in the rankings of search engines and the algorithms of social media. We
need new, large scale systems to help people filter and rank the information on the
open web. In our current situation, according to media theorist danah boyd, “[t]he
onus is on the public to interpret what they see. To self-investigate. Since we live
in a neoliberal society that prioritizes individual agency, we double down on media
literacy as the ‘solution’ to misinformation. It’s up to each of us as individuals to
decide for ourselves whether or not what we’re getting is true.” (boyd 2018)
In closing, I’ll return to the notion of Guerrilla warfare that brought this panel
together. While some of our collaborators and some in the press did use the term
‘Guerrilla archiving’ to describe the data rescue efforts (Currie and Paris 2017),
I generally did not. The work we did was indeed designed to take advantage of
tactics that allow a small number of actors to resist giant state power. However,


if anything, the most direct target of these guerrilla actions in my mind was not
the Trump administration. Instead, the action was designed to prompt responses
by the institutions where many of us work and by communities of scholars and
activists who make up these institutions. It was designed to get as many people as
possible working to address the complex issues raised by the two interconnected
challenges that the Data Refuge project threw into relief. The first challenge,
of course, is the need for new scientific, artistic, scholarly and narrative ways of
contending with the reality of global, human-made climate change. And the second
challenge, as I’ve argued in this paper, is that our systems of establishing and
signaling trustworthiness, quality, reliability and stability of information are in dire
need of creative intervention as well. It is not just publishing but all of our systems
for discovering, sharing, acquiring, describing and storing that scholarship that
need support, maintenance, repair, and perhaps in some cases, replacement. And
this work will rely on scholars, as well as expert information practitioners from a
range of fields (Caswell 2016).

¹ At the time of this writing, we are working on un-packing and repackaging the data
within Data Refuge for eventual inclusion in various Research Library Repositories.

² Ideally, of course, all federally produced datasets would be published in neatly
packaged and more easily preservable containers, along with enough technical
checks to ensure their validity (hashes, checksums, etc.), and each agency would
create a periodically published inventory of datasets. But the situation we
encountered with Data Refuge did not start us in anything like that situation,
despite the hugely successful and important work of the employees who created
and maintained data.gov. For a fuller view of this workflow, see my talk at
CSVConf 2017 (Allen 2017).
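The 'technical checks' mentioned in the note above can be sketched just as briefly: recompute a saved file's SHA-256 digest and compare it with the digest recorded at harvest time. Again this is only an illustration, reusing the hypothetical manifest format from the earlier sketch rather than any agency's or repository's actual practice.

# A minimal illustration, not drawn from any Data Refuge tooling: re-check a saved
# copy against the checksum recorded in the (hypothetical) manifest produced by
# the harvesting sketch earlier.
import hashlib
import json
from pathlib import Path

def verify(manifest_path: str) -> bool:
    """Recompute the SHA-256 of the file named in a manifest and compare digests."""
    manifest = json.loads(Path(manifest_path).read_text())
    data_file = Path(manifest_path).parent / manifest["filename"]
    digest = hashlib.sha256(data_file.read_bytes()).hexdigest()
    return digest == manifest["sha256"]

if __name__ == "__main__":
    # Hypothetical manifest path from the earlier harvesting sketch.
    print(verify("rescued_data/air_quality_2016.csv.manifest.json"))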

Closing note: The workflow established and used at Data Rescue events was
designed to tackle this set of difficult issues, but needed refinement, and was retired
in mid-2017. The Data Refuge project continues, led by Professor Wiggin and her
colleagues and students at PPEH, who are “building a storybank to document
how data lives in the world – and how it connects people, places, and non-human
species.” (“DataRefuge” n.d.) In addition, the set of issues raised by Data Refuge
continue to inform my work and the work of many of our collaborators.


References
Allen, Laurie. 2017. “Contexts and Institutions.” Paper presented at csv,conf,v3, Portland,
Oregon, May 3rd 2017. Accessed May 20, 2018. https://youtu.be/V2gwi0CRYto.
Bodo, Balazs. 2015. “Libraries in the Post-Scarcity Era.” In Copyrighting Creativity:
Creative Values, Cultural Heritage Institutions and Systems of Intellectual Property,
edited by Porsdam. Routledge.
boyd, danah. 2018. “You Think You Want Media Literacy… Do You?” Data & Society: Points.
March 9, 2018. https://points.datasociety.net/you-think-you-want-media-literacy-do-you-7cad6af18ec2.
Caswell, Michelle. 2016. “‘The Archive’ Is Not an Archives: On Acknowledging the
Intellectual Contributions of Archival Studies.” Reconstruction: Studies in
Contemporary Culture 16:1 (2016) (special issue “Archives on Fire”),
http://reconstruction.eserver.org/Issues/161/Caswell.shtml.
Caswell, Michelle, and Marika Cifor. 2016. “From Human Rights to Feminist Ethics: Radical
Empathy in the Archives.” Archivaria 82 (0): 23–43.
Currie, Morgan, and Britt Paris. 2017. “How the ‘Guerrilla Archivists’ Saved History – and
Are Doing It Again under Trump.” The Conversation (blog). February 21, 2017.
https://theconversation.com/how-the-guerrilla-archivists-saved-history-and-are-doing-it-again-under-trump-72346.
“DataRefuge.” n.d. PPEH Lab. Accessed May 21, 2018.
http://www.ppehlab.org/datarefuge/.
“DataRescue Paths.” n.d. PPEH Lab. Accessed May 20, 2018.
http://www.ppehlab.org/datarefugepaths/.
“End of Term Web Archive: U.S. Government Websites.” n.d. Accessed May 20, 2018.
http://eotarchive.cdlib.org/.
“Environmental Data and Governance Initiative.” n.d. EDGI. Accessed May 19, 2018.
https://envirodatagov.org/.
Laster, Shari. 2016. “After the Election: Libraries, Librarians, and the Government - Free
Government Information (FGI).” Free Government Information (FGI). November 23,
2016. https://freegovinfo.info/node/11451.
Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce
Racism. New York: NYU Press.
Tufekci, Zeynep. 2018. “It’s the (Democracy-Poisoning) Golden Age of Free Speech.”
WIRED. Accessed May 20, 2018.
https://www.wired.com/story/free-speech-issue-tech-turmoil-new-censorship/.
“Welcome - Data Refuge.” n.d. Accessed May 20, 2018. https://www.datarefuge.org/.
Williams, Stacie M, and Jarrett Drake. 2017. “Power to the People: Documenting Police
Violence in Cleveland.” Journal of Critical Library and Information Studies 1 (2).
https://doi.org/10.24242/jclis.v1i2.33.

Guerrilla Open Access



tactics in Mars & Medak 2019


Mars & Medak
Against Innovation
2019


Against Innovation: Compromised institutional agency and acts of custodianship
Marcell Mars and Tomislav Medak

abstract
In this essay we reflect on the historic crisis of the university and the public library as two
modern institutions tasked with providing universal access to knowledge and education.
This crisis, precipitated by pushes to marketization, technological innovation and
financialization in universities and libraries, has prompted the emergence of shadow
libraries as collective disobedient practices of maintenance and custodianship. In their
illegal acts of reversing property into commons, commodification into care, we detect a
radical gesture comparable to that of the historical avant-garde. To better understand how
the university and the public library ended up in this crisis, we re-trace their development
starting with the capitalist modernization around the turn of the 20th century, a period of
accelerated technological innovation that also birthed the historical avant-garde. Drawing on
Perry Anderson’s ‘Modernity and Revolution’, we interpret that uniquely creative period
as a period of ambivalence toward an ‘unpredictable political future’ that was open to
diverging routes of social development. We situate the later re-emergence of avant-garde
practices in the 1960s as an attempt to subvert the separations that a mature capitalism
imposes on social reality. In the present, we claim, the radicality equivalent to the avant-garde is to divest from the disruptive dynamic of innovation and focus on the repair,
maintenance and care of the broken social world left in techno-capitalism’s wake.
Comparably, the university and the public library should be able to claim that radical gesture of slowdown and custodianship too, against the imperative of innovation
imposed on them by policymakers and managers.

Custodians.online, the first letter
On 30 November 2015, a number of us, shadow librarians who advocate for, build and maintain ‘shadow libraries’, i.e. online infrastructures allowing users to digitise, share and debate digital texts and collections, published a letter
(Custodians.online, 2015) in support of two of the largest user-created
repositories of pirated textbooks and articles on the Internet – Library Genesis
and Science Hub. Library Genesis and Science Hub’s web domain names were
taken down after a New York court issued an injunction following a copyright
infringement suit filed by the largest commercial academic publisher in the
world – Reed Elsevier. It is a familiar trajectory that a shared digital resource,
once it grows in relevance and size, gets taken down after a court decision.
Shadow libraries are no exception.
The world of higher education and science is structured by uneven development.
The world’s top-ranked universities are concentrated in a dozen rich countries
(Times Higher Education, 2017), commanding most of the global investment
into higher education and research. The oligopoly of commercial academic
publishers is headquartered in no more than half of those. The excessive rise of
subscription fees has made it prohibitively expensive even for the richest
university libraries of the Global North to provide access to all the journals they
would need to (Sample, 2012), drawing protest from academics all over the world
against the outrageously high price tag that Reed Elsevier puts on their work
(‘The Cost of Knowledge’, 2012). Against this concentration of economic might
and exclusivity to access, stands the fact that the rest of the world has little access
to the top-ranked research universities (Baty, 2017; Henning, 2017) and that the
poor universities are left with no option but to tacitly encourage their students to
use shadow libraries (Liang, 2012). The editorial director of global rankings at the
Times Higher Education, Phil Baty, minces no words when he bluntly states ‘that
money talks in global higher education seems … to be self-evident’ (Baty, 2017).
Uneven economic development reinforces global uneven development in higher
education and science – and vice versa. It is in the face of this combined
economic and educational unevenness, that Library Genesis and Science Hub,
two repositories for a decommodified access to otherwise paywalled resources,
attain a particular import for students, academics and researchers worldwide.
And it is in the face of combined economic and educational unevenness, that
Library Genesis and Science Hub continue to brave the court decisions,
continuously changing their domain names, securing ways of access beyond the
World Wide Web and ensuring robust redundancy of the materials in their
repositories.
The Custodians.online letter highlights two circumstances in this antagonism
that cut to the core of the contradictions of reproduction within academia in the
present. The first is the contrast between the extraction of extreme profits from
academia through inflated subscription prices and the increasingly precarious
conditions of studying, teaching and researching:


Consider Elsevier, the largest scholarly publisher, whose 37% profit margin stands
in sharp contrast to the rising fees, expanding student loan debt and poverty-level
wages for adjunct faculty. Elsevier owns some of the largest databases of academic
material, which are licensed at prices so scandalously high that even Harvard, the
richest university of the global north, has complained that it cannot afford them
any longer. (Custodians.online, 2015: n.p.)

The enormous profits accruing to an oligopoly of academic publishers are a
result of a business model premised on harvesting and enclosing the scholarly
writing, peer reviewing and editing that is done mostly for free by academics who are oftentimes struggling to make ends meet in the higher education
environment (Larivière et al., 2015).
The second circumstance is that shadow libraries invert the property relation of
copyright that allows publishers to exclude all those students, teachers and
researchers who don’t have institutional access to scholarly writing and yet need
that access for their education and research, their work and their livelihood in
conditions of heightened precarity:
This is the other side of 37% profit margins: our knowledge commons grows in
the fault lines of a broken system. We are all custodians of knowledge, custodians
of the same infrastructures that we depend on for producing knowledge,
custodians of our fertile but fragile commons. To be a custodian is, de facto, to
download, to share, to read, to write, to review, to edit, to digitize, to archive, to
maintain libraries, to make them accessible. It is to be of use to, not to make
property of, our knowledge commons. (Custodians.online, 2015)

Shadow libraries thus perform an inversion that replaces the ability of ownership
to exclude with the practice of custodianship (a notion implying both the labor of
preservation of cultural artifacts and the most menial and invisible labor of daily
maintenance and cleaning of physical structures) that makes one useful to a
resource held in common and the infrastructures that sustain it.
These two circumstances – antagonism between value extraction and precarity
and antagonism between exclusive property and collective custodianship – signal
a deeper-running crisis of two institutions of higher education and research that
are caught in a joint predicament: the university and the library. This crisis is a
reflection of the impossible challenges placed on them by the capitalist
development, with its global division of labor and its looming threat of massive
technological unemployment, and the response of national policymakers to those
challenges: Can they create a labor force that will be able to position itself
in the global labor market with ever fewer jobs to go around? Can they do it with
less money? Can they shift the cost, risk and responsibility for social challenges
to individual students and patrons, who are now facing the prospect of their
investment in education never working out? Under these circumstances, the
imperative is that these institutions have to re-invent themselves, that they have
to innovate in order to keep up with the disruptive course and accelerated pace of change.

Custodianship and repair
In what follows we will argue against submitting to this imperative of innovation.
Starting from the conditions from which shadow libraries emerge, as laid out in
the first Custodians.online letter, we claim that the historical trajectory of the
university and the library demands that they now embrace a position of
disobedience. They need to go back to their universalizing mission of providing
access to knowledge and education unconditionally to all members of society.
That universalism is a powerful political gesture, an infinite demand (Critchley,
2007) whereby they seek to abolish exclusions and affirm the legacy of the radical
equality they have built as part of the history of emancipatory struggles and
advances since the revolutions of 1789 and 1848. At the core of this legacy is a
promise that the capacity of members of society to collectively contest and claim
rights so as to become free, equal and solidaric is underwritten by a capacity to
have informed opinion, attain knowledge and produce a pedagogy of their own.
The library and the university stand in a historical trajectory of revolutions, a
series of historical discontinuities. The French Revolution seized the holdings of
the aristocracy and the Church, and brought a deluge of books to the Bibliothèque Nationale and the municipal libraries across France (Harris, 1999). Chartism might have failed in its political campaign in 1848, but was successful in setting up the reading rooms and emancipating working-class education from the moral inculcation imposed on it by the ruling classes (Johnson, 2014). The tension
between continuity and discontinuity that comes with disruptive changes was
written into their history long before the present imperative of innovation. And
yet, if these institutions are social infrastructures that have ever since sustained
the production of knowledge and pedagogy by re-producing the organizational
and material conditions of their production, they warn us against taking that
imperative of innovation at face value.
The entrepreneurial language of innovation is the vernacular of global techno-capitalism in the present. Radical disruption is celebrated for its ability to depose old monopolies and birth new ones, to create new markets and first movers to replace the old ones (Bower and Christensen, 1996). It is a formalization reducing
the complexity of the world to the capital’s dynamic of creative destruction
(Schumpeter, 2013), a variant of an old and still hegemonic productivism that
understands social development as primarily a function of radical advances in
technological productivity (Mumford, 1967). According to this view, what counts
is that spurts of technological innovation are driven by cycles of financial capital
facing slumping profits in production (Perez, 2011).
However, once the effect of gains from new technologies starts to slump, once
the technologist’s dream of improving the world hits the hard place of venture
capital monetization and capitalist competition, once the fog of hyped-up
technological boom clears, that which is supposedly left behind comes to the fore. There is then the sunken fixed capital that is no longer productive enough. There are then the technical infrastructures and social institutions that were there
before the innovation and still remain there once its effect tapers off, removed
from view in the productivist mindset, and yet invisibly sustaining that activity of
innovation and any other activity in the social world we inhabit (Hughes, 1993).
What remains then is the maintenance of stagnant infrastructures, the work of
repair to broken structures and of care for resources that we collectively depend
on.
As a number of scholars who have turned their attention to the matters of repair,
maintenance and care suggest, it is the sedimented material infrastructures of
the everyday and their breakdown that in fact condition and drive much of the
innovation process (Graham and Thrift, 2007; Jackson, 2014). As the renowned
historian of technology Thomas Hughes suggested (Hughes, 1993),
technological changes largely address the critical problems of existing
technologies. Earlier still, in the 1980s, David Noble convincingly argued that the
development of forces of production is a function of the class conflict (Noble,
2011). This turns the temporal logic of innovation on its head. Not the creative
destruction of a techno-optimist kind, but the malfunctioning of technological
infrastructures and the antagonisms of social structures are the elementary
pattern of learning and change in our increasingly technological world. As
Stephen Graham and Nigel Thrift argued (2007), once the smooth running
production, consumption and communication patterns in the contemporary
capitalist technosphere start to collapse, the collective coping strategies have to
rise to the challenge. Industrial disasters, breakdowns of infrastructures and
natural catastrophes have taught us that much.
In an age where a global division of labor is producing a growing precarity for
ever larger segments of the world’s working population and the planetary
systems are about to tip into non-linear changes, a truly radical gesture is that
which takes as its focus the repair of the effects of productivism. Approaching the
library and the university through the optic of social infrastructure allows us to
glimpse a radicality that their supposed inertia, complexity and stability make
possible. This slowdown enables the processes of learning and the construction
of collective responses to the double crisis of growth and the environment.
In a social world in which precarity is differently experienced between different
groups, these institutions can accommodate that heterogeneity and diminish
their insecurities, helping the society effectively support structural change. They
are a commons in the non-substantive sense that Lauren Berlant (2016)
proposes, a ‘transitional form’ that doesn’t elide social antagonisms and that lets
different social positions loosely converge, in order to become ‘a powerful vehicle
for troubling troubled times’ (Berlant, 2016: 394-395).
The trajectory of radical gestures, discontinuities by re-invention, and creative
destruction of the old has historically been a hallmark of the avant-gardes. In
what follows, we will revisit the history of the avant-gardes, claiming that,
throughout their periodic iterations, the avant-gardes returned and mutated
always in response to the dominant processes and crises of the capitalist
development of their time. While primarily an artistic and intellectual
phenomenon, the avant-gardes emerged from both an adversarial and a co-constitutive relation to the institutions of higher education and knowledge
production. By revisiting three epochal moments along the trajectory of the
avant-gardes – 1917, 1967 and 2017 – we now wish to establish how the
structural context for radical disruption and radical transformation were
historically changing, bringing us to the present conjuncture where the library
and the university can reclaim the legacy of the avant-gardes by seemingly doing
its exact opposite: refusing innovation.

1917 – Industrial modernization, accelerated temporality and revolutionary subjectivity

In his text on ‘Modernity and Revolution’ Perry Anderson (1984) provides an
unexpected, yet cogent explanation of the immense explosion of artistic
creativity in the short span of time between the late nineteenth and early
twentieth century that is commonly periodized as modernism (or avant-garde,
which he uses sparsely yet interchangeably). Rather than collapsing these wildly
diverging movements and geographic variations of artistic practices into a
monolithic formation, he defines modernism as a broad field of singular
responses resulting from the larger socio-political conjuncture of industrial
modernity. The very different and sometimes antithetical currents of symbolism,
constructivism, futurism, expressionism or suprematism that emerge in
modernism’s fold were defined by three coordinates: 1) an opposition to the
academicism in the art of the ancien régime, which modernist art tendencies both
draw from and position themselves against, 2) a transformative use of
technologies and means of communication that were still in their promising
infancy and not fully integrated into the exigencies of capitalist accumulation and
3) a fundamental ambivalence vis-à-vis the future social formation – capitalism or
socialism, state or soviet – that the process of modernization would eventually
lead to. As Anderson summarizes:
European modernism in the first years of this century thus flowered in the space
between a still usable classical past, a still indeterminate technical present, and a
still unpredictable political future. Or, put another way, it arose at the intersection
between a semi-aristocratic ruling order, a semi-industrialized capitalist economy,
and a semi-emergent, or -insurgent, labour movement. (Anderson, 1984: 150)

Thus these different modernisms emerged operating within the coordinates of
their historical present, committed to a substantive subversion of tradition or to
an acceleration of social development. In his influential theory of the avant-garde,
Peter Bürger (1984) roots its development in the critique of the autonomy that art
seemingly achieved with the rise of capitalist modernity between the eighteenth
and late nineteenth century. The emergence of bourgeois society allowed artists
to attain autonomy in a triple sense: art was no longer bound to the
representational hierarchies of the feudal system; it was now produced
individually and by individual fiat of the artist; and it was produced for individual
appreciation, universally, by all members of society. Starting from the ideal of
aesthetic autonomy enshrined in the works of Kant and Schiller, art eventually
severed its links from the boundedness of social reality and made this freedom
into its subject matter. As the markets for literary and fine artworks were
emerging, artists were gaining material independence from feudal patronage, the
institutions of bourgeois art were being established, and ‘[a]estheticism had made
the distance from the praxis of life the content of works’ (Bürger, 1984: 49).
While capitalism was becoming the dominant reality, the freedom of art was
working to suppress the incursion of that reality in art. It was that distance,
between art and life, that historical avant-gardes would undertake to eliminate
when they took aim at bourgeois art. With the ‘pathos of historical
progressiveness on their side’ (Bürger, 1984: 50), the early avant-gardes were
thus out to relate and transform art and life in one go.
Early industrial capitalism unleashed an enormous social transformation
through the formalization and rationalization of processes, the coordination and
homogenization of everyday life, and the introduction of permanent innovation.
Thus emerged modern bureaucracy, mass society and technological revolutions.
Progress became the telos of social development. Productive forces and global
expansion of capitalist relations made humanity and the world into a new
horizon of both charitable and profitable endeavors, emancipatory and imperial.
The world became a project (Krajewski, 2014).
The avant-gardes around the turn of the 20th century integrated and critically
inflected these transformations. In the spirit of the October Revolution, their
revolutionary subjectivity approached social reality as eminently transformable.
And yet, a recurrent concern of artists was with the practical challenges and
innovations of accelerated modernization: how to control, coordinate and socially
integrate the immense expansionary forces of early industrialization. This was an
invitation to insert one’s own radical visions into life and create new forms of
standardization and rationality that would bring society out of its pre-industrial
backwardness. Central to the avant-garde was abolishing the old and creating the
new, while overcoming the separation of art and social practice. Unleashing
imaginary and constructive forces in a reality that has become rational, collective
and universal: that was its utopian promise; that was its radical innovation. Yet,
paradoxically, it is only once there is the new that the previously existing social
world can be formalized and totalized as the old and the traditional. As Boris
Groys (2014) insisted, the new can be only established once it stands in a relation
to the archive and the museum. This tendency was probably nowhere more in
evidence than, as Sven Spieker documents in his book ‘The big archive – Art
from bureaucracy’ (2008), in the obsession of Soviet constructivists and
suprematists with the archival ordering of the flood of information that the
emergent bureaucratic administration and industrial management were creating
on an unprecedented scale.
The libraries and the universities followed a similar path. As the world became a
project, the aggregation and organization of all knowledge about the world
became a new frontier. The pioneers of library science, Paul Otlet and Melvil
Dewey, consummating the work of centuries of librarianship, assembled index
card catalogs of everything and devised classificatory systems that were powerful
formalizations of the increasingly complex world. These index card catalogs were
a ‘precursor of computing: universal paper machine’ (Krajewski, 2011), pre-dating the ‘universal Turing machine’ and its hardware implementations by
Konrad Zuse and John von Neumann by almost half a century. Knowledge thus
became universal and universalizable: while libraries were transforming into
universal information infrastructures, they were also transforming into places of
popular reading culture and popular pedagogy. Libraries thus were gaining
centrality in the dissemination of knowledge and culture, as the reading culture
was becoming a massive and general phenomenon. Moreover, during the second
part of the nineteenth and the first part of the twentieth century, the working
class would struggle to transform not only libraries, but also universities, into
public institutions providing free access to culture and really useful knowledge
necessary for the self-development and self-organization of the masses (Johnson,
2014).
While universities across modernizing Europe, the US and the USSR would see their opening to the masses only in the decades to come, they shyly started to welcome the working class and women. And yet, universities and schools were
intense places of experimentation and advancement. The Moscow design school
VKhUTEMAS, for instance, carried over the constructivists’ concerns into the
practicalities of the everyday, constructing socialist objects for a new collective
life, novyi byt, in the spirit of ‘Imagine no possessions’ (2005), as Christina Kiaer
has punned in the title of her book. But more importantly, the activities of
universities were driven by the promise that there are no limits to scientific
discovery and that a Leibnizian dream of universal formalization of language
can be achieved through advances in mathematics and logic.

1967 – Mature capitalism, spectacle, resistant subjectivity
In this periodization, the central contention is that the radical gesture of
destruction of the old and creation of the new that was characteristic of the avant-garde has mutated as the historic coordinates of its emergence have mutated too.
Over the last century the avant-garde has divested from the radical gestures and
has assumed a relation to the transformation of social reality that is much more
complicated than its erstwhile cohort in disruptive change – technological
innovation – continues to offer. If technological modernization and the avant-garde were traveling companions at the turn of the twentieth century, after WWII they gradually parted ways. While the avant-garde rather critically inflected what capitalist modernity was doing at a particular moment of its development, technological innovation remained in the same productivist pattern
of disruption and expansion. That technological innovation would remain
beholden to the cyclical nature of capitalist accumulation is, however, no mere
ideological blind-spot. Machinery and technology, as Karl Marx insists in The
Grundrisse, is after all ‘the most adequate form of capital’ (1857) and thus vital to
its dynamic. Hence it comes as no surprise that the trajectory of the avant-garde
is not only a continued substantive subversion of the ever new separations that
capitalist system produces in the social reality, but also a growing critical distance
to technology’s operation within its development.
Thus we skip forward half a century. The year is 1967. Industrial development is
at its apex. The despotism of mass production and its attendant consumerist
culture rules over the social landscape. After WWII, the working class has
achieved great advances in welfare. The ‘control crisis’ (Beniger, 1989), resulting
from an enormous expansion of production, distribution and communication in
the 19th century, and necessitating the emergence of the capacity for
coordination of complex processes in the form of modern bureaucracy and
information technology, persists. As the post-WWII golden period of gains in
productivity, prosperity and growth draws to a close, automation and
computerization start to make their way from the war room to the shop floor.
Growing labor power at home and decolonization abroad make the leading
capitalist economies increasingly struggle to keep profit rates at the levels of the previous two decades. Socialist economies struggle to overcome the initial disadvantages of belated modernization and to instill discipline over labor in order to compete in the dual world-system. It is still a couple of years before the first oil crisis breaks out and the neo-liberal retrenchment begins.
The revolutionary subjectivity of 1917 is now replaced by resistant militancy.
Facing the monotony of continuous-flow production and the prospect of bullshit
jobs in service industries that start to expand through the surplus of labor time
created by technological advances (Graeber, 2013), the workers perfect their ingenuity in shirking the intensity and dullness of work. The consumerist culture
instills boredom (Vaneigem, 2012), the social division of labor produces
gendered exploitation at home (James, 2012), the paternalistic welfare provision
results in loss of autonomy (Oliver, 1990).
Sensibility is shaped by mass media whose form and content are structured by
the necessity of creating aggregate demand for the ever greater mass of
commodities and thus the commodity spectacle comes to mediate social
relations. In 1967 Guy Debord’s ‘The society of the spectacle’ is published. The
book analyses the totalizing capture of Western capitalist society by commodity
fetishism, which appears as objectively given. Commodities and their mediatized
simulacra become the unifying medium of social integration that obscures
separations within the society. So, as the crisis of the 1970s approaches, the avant-garde makes its return. It operates now within the coordinates of the mature
capitalist conjuncture. Thus re-semantization, détournement and manipulation
become the representational equivalent of simulating busyness at work, playing
the game of hide-and-seek with the capitalist spectacle and turning the spectacle
onto itself. While the capitalist development avails itself of media and computers
to transform the reality into the simulated and the virtual, the avant-garde’s
subversive twist becomes to take the simulated and the virtual as reality and reappropriate them for playful transformations. Critical distance is no longer
possible under the centripetal impact of images (Foster, 1996), there’s no
revolutionary outside from which to assail the system, just one to escape from.


Thus, the exodus and autonomy from the dominant trajectory of social
development rather than the revolutionary transformation of the social totality
become the prevailing mode of emancipatory agency. Autonomy through forms
of communitarian experimentation attempts to overcome the separation of life
and work, home and workplace, reproduction and production and their
concealment in the spectacle by means of micro-political experiments.
The university – in the meantime transformed into an institution of mass
education, accessible to all social strata – suddenly catapults itself center-stage,
placing the entire post-WWII political edifice with its authoritarian, repressive
and neo-imperial structure into question, as students make radical demands of
solidarity and liberation. The waves of radical political movements in which
students play a central role spread across the world: the US, Czechoslovakia,
France, Western Germany, Yugoslavia, Pakistan, and so on. The institution
becomes a site from which and against which mass civil rights, anti-imperial,
anti-nuclear, environmental, feminist and various other new left movements
emerge.
It is in the context of exodus and autonomy that new formalizations and
paradigms of organizing knowledge emerge. Distributed, yet connected. Built
from bottom up, yet powerful enough to map, reduce and abstract all prior
formalizations. Take, for instance, Ted Nelson’s Project Xanadu that introduced
to the world the notion of hypertext and hyperlinking. Pre-dating the World Wide
Web by a good 25 years, Xanadu implemented the idea that a body of written
texts can be understood as a network of two-way references. With the advent of
computer networks, whose early adopters were academic communities, that
formalization materialized in real infrastructure, paving the way for a new
instantiation of the idea that the entire world of knowledge can be aggregated,
linked and made accessible to the entire world. As Fred Turner documents in
‘From counterculture to cyberculture’ (2010), the links between autonomy-seeking dropouts and early cyberculture in the US were intimate.
Countercultural ideals of personal liberation at a distance from the society
converged with the developments of personal computers and computer networks
to pave the way for early Internet communities and Silicon Valley
entrepreneurialism.
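
The formalization attributed to Xanadu here, that every reference is visible from both of its ends, can be sketched in a few lines. This is only a toy illustration of two-way linking under stated assumptions (all names are hypothetical), not Nelson's system and not the Web's one-way anchor:

```python
from collections import defaultdict

class TwoWayLinks:
    """Toy registry in which every link is also visible from its target."""

    def __init__(self):
        self.links_from = defaultdict(set)  # document -> documents it references
        self.links_to = defaultdict(set)    # document -> documents referencing it

    def link(self, source, target):
        """Registering a reference also registers the back-reference."""
        self.links_from[source].add(target)
        self.links_to[target].add(source)

    def references(self, doc):
        return self.links_from[doc]

    def backlinks(self, doc):
        return self.links_to[doc]

# Unlike a one-way web link, the cited text "knows" who cites it:
net = TwoWayLinks()
net.link("essay_a", "monograph_b")
print(net.backlinks("monograph_b"))  # {'essay_a'}
```

The point of the sketch is the symmetry: the network of texts can always be traversed in both directions, which the later Web's one-way hyperlink gave up.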
No less characteristic of the period were new formalizations and paradigms of
technologically-mediated subjectivity. The tension between the virtual and the
real, autonomy and simulation of autonomy, was not only present in the avant-garde’s playful takes on mass media. By the end of the 1950s, the development of
computer hardware reached a stage where it was running fast enough to cheat
human perception in the same way moving images on film and television did. In
the computer world, that illusion was time-sharing. Before the illusion could
work, the concept of an individual computer user had to be introduced (Hu,
2015). Mainframe computer systems such as the IBM 360/370 were fast enough
to run a software-simulated (‘virtual’) clone of the system for every user (Pugh et
al., 1991). This allowed users to access the mainframe not sequentially one after
the other, but at the same time – sharing the process-cycles among themselves.
Every user was made to feel as if they were running their own separate (‘real’)
computer. The computer experience thus became personal and subjectivities
individuated. This interplay of simulation and reality became common in the late
1960s. Fifty years later this interplay would become essential for the massive
deployment of cloud computing, where all computer users leave traces of their
activity in the cloud, but only a few can tell what is virtual (i.e. simulated) and what
is real (i.e. ‘bare machine’).
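
The illusion described in this paragraph, many users sharing one processor while each appears to have a machine of their own, can be reduced to a toy model. The sketch below is a deliberately simplified, hypothetical round-robin scheduler that uses Python generators as stand-ins for user programs; the actual IBM 360/370 virtual machine monitors were of course far more involved:

```python
from collections import deque

def user_program(name, steps):
    """A toy 'user process' that yields control back after each unit of work."""
    for i in range(1, steps + 1):
        print(f"{name}: step {i}")
        yield  # hand the single shared processor back to the scheduler

def time_share(programs):
    """Round-robin scheduling: one slice per program per turn."""
    queue = deque(programs)
    while queue:
        program = queue.popleft()
        try:
            next(program)          # run one time slice
            queue.append(program)  # requeue, sustaining the illusion of simultaneity
        except StopIteration:
            pass                   # this 'user' has finished

time_share([user_program("alice", 3), user_program("bob", 2)])
```

Interleaved quickly enough, each user sees only their own uninterrupted sequence of steps, which is the sense in which the computer experience 'became personal'.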
The libraries followed the same double trajectory of universities. In the 1960s,
the library field started to call into question the merit of objectivity and neutrality
that librarianship embraced in the 1920s with its induction into the status of
science. In the context of social upheavals of the 1960s and 1970s, librarians
started to question ‘The Myth of Library Neutrality’ (Branum, 2008). With the
transition to a knowledge economy and the transformation of information into a commodity, librarians could no longer ignore that neutrality had the effect of
perpetuating the implicit structural exclusions of class, gender and race and that
they were the gatekeepers of epistemic and material privilege (Jansen, 1989;
Iverson, 1999). The egalitarian politics written into the de-commodifying and enabling social mission of public libraries started to trump neutrality. Thus
libraries came to acknowledge their commitment to the marginalized, their
pedagogies and their struggles.
At the same time, library science expanded and became enmeshed with
information science. The capacity to aggregate, organize and classify huge bodies
of information, to view it as an interlinked network of references indexed in a
card catalog, sat well with the developments in the computer world. In return, the
expansion of access to knowledge that the new computer networks promised fell
in line with the promise of public libraries.

2017 – Crisis in the present, financialization, compromised subjectivity
We arrive in the present. The effects of neo-liberal restructuring, the global
division of labor and supply-chain economy are petering out. Global capitalism
struggles to maintain growth, while at the same time failing to slow down
accelerating consumption of energy and matter. It thus arrives at a double crisis
– a crisis of growth and a crisis of planetary boundaries. Against the profit squeeze of the 1970s, fixes were applied in the form of the relocation of production, the breaking-up of organized labor and the integration of free markets across the world. Yet those fixes have not stopped the long downturn of the capitalist system that culminated in the crisis of 2008 (Brenner, 2006). Currently capital prefers to
sit on US$ 13.4 trillion of negative yielding bonds rather than risk investing into
production (Wigglesworth and Platt, 2016). Financialization is driving the efforts
to quickly boost and capture value where long-term investment makes little
sense. Finance capital privileges short-term value maximization through economic rents over long-term investment into growth. Its logic dominates all
aspects of the economy and the everyday (Brown, 2015). When it is betting on
long-term changes in production, capital is rather picky and chooses to bet on
technologies that are the harbingers of future automation. Those technologies
might be the death knell of the social expectation of full employment, creating a
reserve army of labor that will be pushed to various forms of casualized work,
work on demand and workfare. The brave new world of the gig-economy awaits.
The accelerated transformation of the labor market has made adaptation through
education and re-skilling difficult. Stable employment is mostly available in
sectors where highly specialized technological skills are required. Yet those
sectors need far fewer workers than mass manufacturing required. Re-skilling is
only made more difficult by the fact that austerity policies are reducing the
universal provision of social support needed to allow workers to adapt to these
changes: workfare, the housing crisis, cuts in education and arts have converged
to make it so. The growing precarity of employment is doing away with the
separation between working time and free time. The temporal decomposition is
accompanied by the decomposition of workplace and living space. Fewer and
fewer jobs have a defined time and place in which they are performed (Huws,
2016) and while these processes are general, the conditions of precarity diverge
greatly from profession to profession, from individual to individual.
At the same time, we are living through record global warming, the sixth great
extinction and the destabilization of Earth’s biophysical systems. Globally, we’re
overshooting Earth’s regenerative capacities by a factor of 1.6 (Latouche, 2009),
some countries such as the US and the Gulf by a factor of 5 (Global Footprint
Network, 2013). And the environmental inequalities within countries are greater
than those between the countries (Piketty and Chancel, 2015). Unless, by some wonder, almost non-existent negative emissions technologies do materialize (Anderson and Peters, 2016), we are on a path of global destabilization of socio-environmental metabolisms that no rate of technological change can realistically
mitigate (Loftus et al., 2015). Betting on settling on Mars is equally plausible.

So, if the avant-garde has at the beginning of the 20th century responded to the
mutations of early modernization, in the 1960s to the integrated spectacle of the
mature capitalism, where is the avant-garde in the present?
Before we try to address the question, we need to return to our two public
institutions of mass education and research – the university and the library.
Where is their equalizing capacity in a historical conjuncture marked by the
rising levels of inequality? In the accelerating ‘race against the machine’
(Brynjolfsson and McAfee, 2012), with the advances in big data, AI and
robotization threatening to obliterate almost half of the jobs in advanced
economies (Frey and Osborne, 2013; McKinsey Global Institute, 2018), the
university is no longer able to fulfill the promise that it can provide both the
breadth and the specialization that are required to stave off the effects of
runaway technological unemployment. It is no surprise that it can’t, because this
is ultimately a political question of changing the present direction of
technological and social development, and not a question of institutional
adaptation.
Yet while the university’s performance becomes increasingly scrutinized on the
basis of what its work is contributing to the stalling economy and challenges of
the labor market, on the inside it continues to be entrenched in defending
hierarchies. The uncertainty created by assessment-tied funding puts academics
on the defensive and wary of experimentation and resistance. Imperatives of
obsessive administrative reporting, performance metrics and short-term
competition for grant-based funding have, in Stefan Collini’s words, led to ‘a cumulative reduction in the autonomy, status and influence of academics’, where ‘[s]ystemic underfunding plus competition and punitive performance-management is seen as lean efficiency and proper accountability’ (Collini, 2017:
ch.2). Assessment-tied activities produce a false semblance of academic progress
by creating impact indicators that are frequently incidental to the research, while
at the same time demanding an enormous amount of wasted effort that goes into
unsuccessful application proposals (Collini, 2017). Rankings based on
comparative performance metrics then allow university managers in the
monetized higher education systems such as the UK’s to pitch to prospective students
how best to invest the debt they will incur in the future, in order to pay for the
growing tuition fees and cost of study, making the prospect of higher education
altogether less plausible for the majority in the long run (Bailey and Freedman,
2011).
Given that universities are not able to easily provide evidence that they are
contributing to the stalling economy, they are asked by the funders to innovate
instead. To paraphrase Marx, ‘innovate, innovate, that is their Moses and the
prophets’. Innovation, a popular catch-all word with governments and institutional administrators, is gleaned from the entrepreneurial language of techno-capitalism to denote interventions, measures and adaptations in the functioning of all kinds of processes that promise to bring disruptive, almost punitive radical changes as remedies for the failures to respond to the disruptive challenges unleashed by that very same techno-capitalism.
For instance, higher education policy makers, such as the former UK universities minister David Willetts, advocate that universities themselves should use their
competitive advantage, embrace the entrepreneurial opportunity in the global
academic marketplace and transform themselves into startups. Universities have
to become the ‘equivalent of higher education Google or Amazon’ (Gill, 2015). As
Gary Hall reports in his ‘Uberfication of the university’ (2016), a survey of UK vice-chancellors has detected a number of areas where universities under their command should become more disruptively innovative:
Among them are “uses of student data analytics for personalized services” (the
number one innovation priority for 90 percent of vice-chancellors); “uses of
technology to transform learning experiences” (massive open online courses
[MOOCs]; mobile virtual learning environments [VLEs]; “anytime-anywhere
learning”, leading to the demise of lectures and timetables); and “student-driven
flexible study modes” (“multiple entry points” into programs, bringing about an
end to the traditional academic year). (Hall, 2016: n.p.)

Universities in the UK are thus pushed to constantly create trendy programs,
‘publish or perish’, perform and assess, hire and fire, find new sources of
funding, find students, attract the interest of parents, vie for public attention, produce
evidence of immediate impact. All we can expect from such attempts to
transform universities into Googles and Amazons is that we will end up with an
oligopoly of a few prestige brands franchised all around the world – if the
strategy proves ‘successful’, or – if not – just with a world in which universities
go on faking disruptive innovations while waiting for some miracle to happen
and redeem them in the eyes of neoliberal policy makers.
These are all short-term strategies modeled on the quick extraction of value that
Wendy Brown calls the ‘financialization of everything’ (Brown, 2015: 70).
However, the best in the game of such quick rent-seeking are, as always, those
universities that carry the most prestige, have the most assets and need to be
least afraid for their future, whereas the rest are simply struggling in the prospect
of reduced funding.
Those universities in ‘peripheral’ countries, which rarely show up anywhere near
the top of the global rankings, are in a particularly disadvantaged situation. As
Danijela Dolenec has calculated:

[T]he whole region [of Western Balkans] invests approximately EUR 495 million in
research and development per year, which is equivalent of one (second-largest) US
university. Current levels of investment cannot have a meaningful impact on the
current model of economic development ... (Dolenec, 2016: 34)

So, these universities don’t have much capacity to capture value in the global
marketplace. In fact, their work in educating the masses matters less to their
economies, as these economies are largely based on selling cheap low-skilled
labor. So, their public funders leave them in their underfunded torpor to
improvise their way through education and research processes. It is these
institutions that depend the most on the Library Genesis and Science Hubs of
this world. If we look at the download data of Library Genesis, as Balázs Bodó (2015) has, we can discern a clear pattern: the users in the rich economies use
these shadow libraries to find publications that are not available in digital form or are paywalled, while the users in the developing economies use them to
find publications they don’t have access to in print to start with.
As for libraries, in the shift to the digital they were denied the right to provide
access that has now radically expanded (Sullivan, 2012), so they are losing their
central position in the dissemination of and access to knowledge. The decades of
retrenchment in social security, unemployment support, social housing, arts and
education have made libraries, with their resources open to broad communities,
into a stand-in for failing welfare institutions (Mattern, 2014). But with the onset
of the 2008 crisis, libraries have been subjected to brutal cuts, affecting their ability
to stay open, service their communities and in particular the marginalized
groups and children (Kean, 2017). Just like universities, libraries have thus seen
their capacity to address structural exclusions of marginalized groups and
provide support to those affected by precarity compromised.
Libraries thus find themselves struggling to provide legitimation for the support
they receive. So they re-invent and re-brand themselves as ‘third places’ of
socialization for the elderly and the youth (Engel-Johnson, 2017), spaces where
the unemployed can find assistance with their job applications and the socially
marginalized a public location with no economic pressures. All these functions,
however, are not something that public libraries didn’t do before, alongside their primary function of providing universal access to all written knowledge, in which they are nowadays, in the digital economy, severely limited.
All that innovation that universities and libraries are undertaking seems to be
little innovation at all. It is rather a game of hide and seek, behind which these
institutions are struggling to maintain their substantive mission and operation.
So, what are we to make of this position of compromised institutional agency? In
a situation where progressive social agency no longer seems to be within the
remit of these institutions? The fact is that with the growing crisis of precarity
and social reproduction, where fewer and fewer have the time free from casualized work to study, the convenience to do so at home or the financial prospects to incur a debt by
enrolling in a university, these institutions should, could and sometimes do
provide sustaining social arrangements and resources – not only to academics,
students and patrons, but also to a general public – that can reduce economic
imperatives and diminish insecurities. While doing this they also create
institutional preconditions that, unlike business-cycle driven institutions, can
support the structural repair that the present double crisis demands.
If the historical avant-garde was about birthing the new, nowadays repeating its
radicalism would seem to imply cutting through the fog of innovation. Its
radicalism would be to inhabit the non-new. The non-new that persists and in the
background sustains the broken social and technological world that the techno-capitalist innovation wants to disrupt and transcend. Bullshit jobs and simulating
busyness at work are correlative of the fact that free time and the abundance of
social wealth created by growing productivity have paradoxically resulted in
underemployment and inequality. We’re at a juncture: accelerated crisis of
capitalism, accelerated climate change, accelerated erosion of political systems
are trajectories that leave little space for repair. The full surrender of
technological development into the hands of the market forces leaves even less.
The avant-garde radicalism nowadays is standing with the social institutions that
permit, speaking with Lauren Berlant, the ‘loose convergence’ of social
heterogeneity needed to construct ‘transitional form[s]’ (2016: 394). Unlike the
solutionism of techno-communities (Morozov, 2013) that tend to reduce
uncertainty of situations and conflict of values, social institutions permit
negotiating conflict and complexity in the situations of crisis that Jerome Ravetz
calls postnormal – situations ‘where facts are uncertain, values in dispute, stakes
high and decisions urgent’ (Ravetz, 2003: 75). On that view, libraries and
universities, as social infrastructures, provide a chance for retardation and
slowdown, and a capacity for collective disobedience. Against the radicalizing
exclusions of property and labor market, they can lower insecurities and
disobediently demand universal access to knowledge and education, a mass
intellectuality and autonomous critical pedagogy that increasingly seems a thing
of the past. Against the imposition to translate quality into metrics and capture
short-term values through assessment, they can resist the game of simulation.
While the playful simulation of reality was a thing in 1967, in 2017 it is no
longer. Libraries and universities can stop faking ‘innovativity’, ‘efficiency’ and
‘utility’.


Custodians.online, the second letter
On 30 November, 2016 a second missive was published by Custodians.online
(2016). On the twentieth anniversary of UbuWeb, ‘the single-most important
archive of avant-garde and outsider art’ on the Internet, the drafters of the letter
followed up on their initial call to acts of care for the infrastructure of our shared
knowledge commons that the first letter ended with. The second letter was a gift
card to Ubu, announcing that it had received two mirrors, i.e. exact copies of the
Ubu website accessible from servers in two different locations – one in Iceland,
supported by a cultural activist community, and another one in Switzerland,
supported by a major art school – whose maintenance should ensure that Ubu
remains accessible even if its primary server is taken down.
McKenzie Wark in their text on UbuWeb poignantly observes that shadow
libraries are:
tactics for intervening in three kinds of practices, those of the art-world, of
publishing and of scholarship. They respond to the current institutional, technical
and political-economic constraints of all three. As it says in the Communist
Manifesto, the forces for social change are those that ask the property question.
While détournement was a sufficient answer to that question in the era of the
culture industries, they try to formulate, in their modest way, a suitable tactic for
answering the property question in the era of the vulture industries. (Wark, 2015:
116)

As we claimed, the avant-garde radicalism can be recuperated for the present
through the gestures of disobedience, deceleration and demands for
inclusiveness. Ubu already hints toward such recuperation on three coordinates:
1) practiced opposition to the regime of intellectual property, 2) transformative
use of old technologies, and 3) a promise of universal access to knowledge and
education, helping to foster mass intellectuality and critical pedagogy.
The first Custodians.online letter was drafted to voice the need for a collective
disobedience. Standing up openly in public for the illegal acts of piracy, which
are, however, made legitimate by the fact that students, academics and
researchers across the world massively contribute to and resort to pirate repositories
of scholarly texts, holds the potential to overturn the noxious pattern of court
cases that have consistently led to such resources being shut down.
However, the acts of disobedience need not be made explicit in the language of
radicalism. For a public institution, disobedience can also be doing what should
not be done: long-term commitment to maintenance – for instance, of a mirror –
while dealing institutionally with all the conflicts and challenges that doing this
publicly entails.

The second Custodians.online letter was drafted to suggest that opportunity:
In a world of money-crazed start-ups and surveillance capitalism, copyright
madness and abuse, Ubu represents an island of culture. It shows what a single
person, with dedication and focus, can achieve. There are lessons to be drawn
from this:

1) Keep it simple and avoid constant technology updates. Ubu is plain
HTML, written in a text-editor.
2) Even a website should function offline. One should be able to take the
hard disk and run. Avoid the cloud – computers of people you don’t
know and who don’t care about you.
3) Don’t ask for permission. You would have to wait forever, turning
yourself into an accountant and a lawyer.
4) Don’t promise anything. Do it the way you like it.
5) You don’t need search engines. Rely on word-of-mouth and direct
linking to slowly build your public. You don’t need complicated
protocols, digital currencies or other proxies. You need people who
care.
6) Everything is temporary, even after 20 years. Servers crash, disks die,
life changes and shit happens. Care and redundancy is the only path to
longevity. Care and redundancy is the reason why we decided to run
mirrors. We care and we want this resource to exist… should shit
happen, this multiplicity of locations and institutions might come in
handy. We will see. Find your Ubu. It’s time to mirror each other in
solidarity. (Custodians.online, 2016)

references
Anderson, K. and G. Peters (2016) ‘The trouble with negative emissions’, Science,
354 (6309): 182– 183.
Anderson, P. (1984) ‘Modernity and revolution’, New Left Review, (144): 96– 113.
Bailey, M. and D. Freedman (2011) The assault on universities: A manifesto for
resistance. London: Pluto Press.
Baty, P. (2017) ‘These maps could change how we understand the role of the
world’s top universities’, Times Higher Education online, May 27. [https://www.timeshighereducation.com/blog/these-maps-could-change-how-we-understand-role-worlds-top-universities]
Beniger, J. (1989) The control revolution: Technological and economic origins of the
information society. Cambridge: Harvard University Press.
Berlant, L. (2016) ‘The commons: Infrastructures for troubling times’,
Environment and Planning D: Society and Space, 34 (3): 393– 419.
Bodó, B. (2015) ‘Libraries in the post-scarcity era’, in H. Porsdam (ed.)
Copyrighting creativity: Creative values, cultural heritage institutions and systems
of intellectual property. London: Routledge.
Bower, J.L. and C.M. Christensen. (1996) ‘Disruptive technologies: Catching the
wave’, The Journal of Product Innovation Management, 1(13): 75– 76.
Branum, C. (2008) ‘The myth of library neutrality’. [https://candisebranum.wordpress.com/2014/05/15/the-myth-of-library-neutrality/]
Brenner, R. (2006) The economics of global turbulence: The advanced capitalist
economies from long boom to long downturn, 1945-2005. London: Verso.
Brown, W. (2015) Undoing the demos: Neoliberalism’s stealth revolution. Cambridge:
MIT Press.
Brynjolfsson, E. and A. McAfee (2012) Race against the machine: How the digital
revolution is accelerating innovation, driving productivity, and irreversibly
transforming employment and the economy. Lexington: Digital Frontier Press.
Bürger, P. (1984) Theory of the avant-garde. Manchester: Manchester University
Press.
Collini, S. (2017) Speaking of universities. London: Verso.
Critchley, S. (2007) Infinitely demanding: Ethics of commitment, politics of
resistance. London: Verso.
Custodians.online (2015) ‘In solidarity with Library Genesis and Sci-Hub’. [http://custodians.online]
Custodians.online (2016) ‘Happy birthday, Ubu.com’. [http://custodians.online/ubu]

Dolenec, D. (2016) ‘The implausible knowledge triangle of the Western Balkans’,
in S. Gupta, J. Habjan and H. Tutek (eds.) Academic labour, unemployment and
global higher education: Neoliberal policies of funding and management. New
York: Springer.


Engel-Johnson, E. (2017) ‘Reimagining the library as an anti-café’, Discover
Society. [http://discoversociety.org/2017/04/05/reimagining-the-library-as-an-anti-cafe/]
Foster, H. (1996) The return of the real: The avant-garde at the end of the century.
Cambridge: MIT Press.
Frey, C.B. and M. Osborne (2013) The future of employment: How susceptible are
jobs to computerisation? Oxford: Oxford Martin School.
[http://www.oxfordmartin.ox.ac.uk/publications/view/1314]
Gill, J. (2015) ‘Losing Our Place in the Vanguard?’, Times Higher Education, 18
June. [https://www.timeshighereducation.com/opinion/losing-our-place-vanguard]
Global Footprint Network (2013) Ecological wealth of nations. [http://www.
footprintnetwork.org/ecological_footprint_nations/]
Graeber, D. (2013) ‘On the phenomenon of bullshit jobs’, STRIKE! Magazine.
[https://strikemag.org/bullshit-jobs/]
Graham, S. and N. Thrift (2007) ‘Out of order: Understanding repair and
maintenance’, Theory, Culture & Society, 24(3): 1– 25.
Groys, B. (2014) On the new. London: Verso.
Hall, G. (2016) The uberfication of the university. Minneapolis: Minnesota
University Press.
Harris, M.H. (1999) History of libraries of the western world. London: Scarecrow
Press.
Henning, B. (2017) ‘Unequal elite: World university rankings 2016/17’, Views of
the World. [http://www.viewsoftheworld.net/?p=5423]
Hu, T.-H. (2015) A prehistory of the cloud. Cambridge: MIT Press.
Hughes, T.P. (1993) Networks of power: Electrification in Western society, 1880-1930.
Baltimore: JHU Press.
Huws, U. (2016) ‘Logged labour: A new paradigm of work organisation?’, Work
Organisation, Labour and Globalisation, 10(1): 7– 26.
Iverson, S. (1999) ‘Librarianship and resistance’, Progressive Librarian, 15, 14-19.
Jackson, S.J. (2014) ‘Rethinking repair’, in T. Gillespie, P.J. Boczkowski and K.A.
Foot (eds.), Media technologies: Essays on communication, materiality, and
society. Cambridge: MIT Press.
James, S. (2012) ‘A woman’s place’, in S. James, Sex, race and class, the perspective
of winning: A selection of writings 1952-2011. Oakland: PM Press.



Jansen, S.C. (1989) ‘Gender and the information society: A socially structured
silence’. Journal of Communication, 39(3): 196– 215.
Johnson, R. (2014) ‘Really useful knowledge’, in A. Gray, J. Campbell, M.
Erickson, S. Hanson and H. Wood (eds.) CCCS selected working papers: Volume
1. London: Routledge.
Kean, D. (2017) ‘Library cuts harm young people’s mental health services, warns
lobby’. The Guardian, 13 January. [http://www.theguardian.com/books/2017/jan/13/library-cuts-harm-young-peoples-mental-health-services-warns-lobby]
Kiaer, C. (2005) Imagine no possessions: The socialist objects of Russian
Constructivism. Cambridge: MIT Press.
Krajewski, M. (2014) World projects: Global information before World War I.
Minneapolis: University of Minnesota Press.
Krajewski, M. (2011) Paper machines: About cards & catalogs, 1548-1929.
Cambridge: MIT Press.
Larivière, V., S. Haustein and P. Mongeon (2015) ‘The oligopoly of academic
publishers in the digital era’. PLoS ONE 10(6). [http://dx.doi.org/
10.1371/journal.pone.0127502]
Latouche, S. (2009) Farewell to growth. Cambridge: Polity Press.
Liang, L. (2012) ‘Shadow Libraries’. e-flux (37). [http://www.e-flux.com/journal
/37/61228/shadow-libraries/]
Loftus, P.J., A.M. Cohen, J.C.S. Long and J.D. Jenkins (2015) ‘A critical review of
global decarbonization scenarios: What do they tell us about feasibility?’ Wiley
Interdisciplinary Reviews: Climate Change, 6(1): 93– 112.
Marx, K. (1973 [1857]) The grundrisse. London: Penguin Books in association with
New Left Review. [https://www.marxists.org/archive/marx/works/1857/gru
ndrisse/ch13.htm]
Mattern, S. (2014) ‘Library as infrastructure’, Places Journal. [https://placesjo
urnal.org/article/library-as-infrastructure/]
McKinsey Global Institute (2018) Harnessing automation for a future that works.
[https://www.mckinsey.com/global-themes/digital-disruption/harnessing-au
tomation-for-a-future-that-works]
Morozov, E. (2013) To save everything, click here: The folly of technological
solutionism. New York: PublicAffairs.
Mumford, L. (1967) The myth of the machine: Technics and human development.
New York: Harcourt Brace Jovanovich.

Noble, D.F. (2011) Forces of production. New Brunswick: Transaction Publishers.
Oliver, M. (1990) The politics of disablement. London: Macmillan Education.
Perez, C. (2011) ‘Finance and technical change: A long-term view’, African
Journal of Science, Technology, Innovation and Development, 3(1): 10-35.
Piketty, T. and L. Chancel (2015) Carbon and inequality: From Kyoto to Paris –
Trends in the global inequality of carbon emissions (1998-2013) and prospects for
an equitable adaptation fund. Paris: Paris School of Economics.
[http://piketty.pse.ens.fr/files/ChancelPiketty2015.pdf]
Pugh, E.W., L.R. Johnson and J.H. Palmer (1991) IBM’s 360 and early 370 systems.
Cambridge: MIT Press.
Ravetz, J. (2003) ‘Models as metaphor’, in B. Kasemir, J. Jaeger, C.C. Jaeger and
M.T. Gardner (eds.) Public participation in sustainability science: A handbook,
Cambridge: Cambridge University Press.
Sample, I. (2012) ‘Harvard university says it can’t afford journal publishers’
prices’, The Guardian, 24 April. [https://www.theguardian.com/science/2012/apr/24/harvard-university-journal-publishers-prices]
Schumpeter, J.A. (2013 [1942]) Capitalism, socialism and democracy. London:
Routledge.
Spieker, S. (2008) The big archive: Art from bureaucracy. Cambridge: MIT Press.
Sullivan, M. (2012) ‘An open letter to America’s publishers from ALA President
Maureen Sullivan’, American Library Association, 28 September.
[http://www.ala.org/news/2012/09/open-letter-america%E2%80%99s-publ
ishers-ala-president-maureen-sullivan].
‘The Cost of Knowledge’ (2012). [http://thecostofknowledge.com/]
Times Higher Education (2017) ‘World university rankings’.
[https://www.timeshighereducation.com/world-university-rankings/2018/world-ranking]
Turner, F. (2010) From counterculture to cyberculture: Stewart Brand, the Whole
Earth Network, and the rise of digital utopianism. Chicago: University of Chicago
Press.
Vaneigem, R. (2012) The revolution of everyday life. Oakland: PM Press.
Wark, M. (2015) ‘Metadata punk’, in T. Medak and M. Mars (eds.) Public library.
Zagreb: What, How & for Whom/WHW & Multimedia Institute.




Wigglesworth, R. and E. Platt (2016) ‘Value of negative-yielding bonds hits
$13.4tn’, Financial Times, 12 August. [http://www.ft.com/cms/s/0/973b6060-60ce-11e6-ae3f-77baadeb1c93.html]

the authors
Marcell Mars is a research associate at the Centre for Postdigital Cultures at Coventry
University (UK). Mars is one of the founders of Multimedia Institute/MAMA in Zagreb.
His research ‘Ruling Class Studies’, started at the Jan van Eyck Academy (2011),
examines state-of-the-art digital innovation, adaptation, and intelligence created by
corporations such as Google, Amazon, Facebook, and eBay. He is a doctoral student at
Digital Cultures Research Lab at Leuphana University, writing a thesis on ‘Foreshadowed
Libraries’. Together with Tomislav Medak he founded Memory of the World/Public
Library, for which he develops and maintains software infrastructure.
Email: ki.be@rkom.uni.st
Tomislav Medak is a doctoral student at the Centre for Postdigital Cultures at Coventry
University. Medak is a member of the theory and publishing team of the Multimedia
Institute/MAMA in Zagreb, as well as an amateur librarian for the Memory of the
World/Public Library project. His research focuses on technologies, capitalist
development, and postcapitalist transition, particularly on economies of intellectual
property and unevenness of technoscience. He authored two short volumes: ‘The Hard
Matter of Abstraction—A Guidebook to Domination by Abstraction’ and ‘Shit Tech for A
Shitty World’. Together with Marcell Mars he co-edited ‘Public Library’ and ‘Guerrilla
Open Access’.
Email: tom@mi2.hr



tactics in Mars & Medak 2019


Mars & Medak
System of a Takedown
2019


System of a Takedown: Control and De-­commodification in the Circuits of Academic Publishing
Marcell Mars and Tomislav Medak

Since 2012 the Public Library/Memory of the World1 project has
been developing and publicly supporting scenarios for massive
disobedience against the current regulation of production and
circulation of knowledge and culture in the digital realm. While
the significance of that year may not be immediately apparent to
everyone, across the peripheries of an unevenly developed world
of higher education and research it produced a resonating void.
The takedown of the book-­sharing site Library.nu in early 2012
gave rise to an anxiety that the equalizing effect that its piracy
had created—­the fact that access to the most recent and relevant
scholarship was no longer a privilege of rich academic institutions
in a few countries of the world (or, for that matter, the exclusive
preserve of academia to begin with)—­would simply disappear into
thin air. While alternatives within these peripheries quickly filled
the gap, it was only through an unlikely set of circumstances that
they were able to do so, let alone continue to exist in light of the
legal persecution they now also face.


The starting point for the Public Library/Memory of the World
project was a simple consideration: the public library is the institutional form that societies have devised in order to make knowledge
and culture accessible to all their members regardless of social or
economic status. There’s a political consensus that this principle of
access is fundamental to the purpose of a modern society. Yet, as
digital networks have radically expanded the access to literature
and scientific research, public libraries were largely denied the
ability to extend to digital “objects” the kind of de-­commodified
access they provide in the world of print. For instance, libraries
frequently don’t have the right to purchase e-­books for lending and
preservation. If they do, they are limited by how many times—­
twenty-­six in the case of one publisher—­and under what conditions
they can lend them before not only the license but the “object”
itself is revoked. In the case of academic journals, it is even worse:
as they move to predominantly digital models of distribution,
libraries can provide access to and “preserve” them only for as
long as they pay extortionate prices for ongoing subscriptions. By
building tools for organizing and sharing electronic libraries, creating digitization workflows, and making books available online, the
Public Library/Memory of the World project is aimed at helping to
fill the space that remains denied to real-­world public libraries. It is
obviously not alone in this effort. There are many other platforms,
some more public, some more secretive, working to help people
share books. And the practice of sharing is massive.
—­https://www.memoryoftheworld.org
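
In its most reduced form, "organizing and sharing electronic libraries" comes down to a catalog that amateur librarians can exchange as a plain file alongside the books themselves. A minimal sketch of that idea, not the project's actual data model; every field name below is an illustrative assumption:

# A toy catalog record an amateur librarian might share with peers.
from dataclasses import dataclass, asdict
import json

@dataclass
class CatalogEntry:
    # Illustrative fields only, not the project's schema.
    title: str
    authors: list
    year: int
    file_path: str   # where the librarian keeps the digital copy
    librarian: str   # who shares the entry

def export_catalog(entries, path="catalog.json"):
    # A shared catalog is simply a list of records other peers can read.
    with open(path, "w", encoding="utf-8") as f:
        json.dump([asdict(e) for e in entries], f, indent=2)

export_catalog([CatalogEntry("Paper Machines", ["Markus Krajewski"], 2011,
                             "books/paper-machines.pdf", "an amateur librarian")])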

Capitalism and Schizophrenia
New media remediate old media. Media pay homage to their
(mediatic) predecessors, which themselves pay homage to their
own (mediatic) predecessors. Computer graphics remediate film,
which remediates photography, which remediates painting, and so
on (McLuhan 1965, 8; Bolter and Grusin 1999). Attempts to understand new media technologies always settle on a set of metaphors

(of the old and familiar), in order to approximate what is similar,
and yet at the same time name the new. Every such metaphor has
its semiotic distance, decay, or inverse-square law, which sets the limit to how far the metaphor can go in explaining the phenomenon to which it is applied. The intellectual work in the Age of
Mechanical Reproduction thus received an unfortunate metaphor:
intellectual property. A metaphor modeled on the scarce and
exclusive character of property over land. As the Age of Mechanical
Reproduction became more and more the Age of Discrete and
Digital Reproduction, another metaphor emerged, one that reveals
the quandary left after decades of decay resulting from the increasing distanciation of intellectual property from the intellectual work
it seeks to regulate, and that metaphor is: schizophrenia.
Technologies compete with each other—­the discrete and the
digital thus competes with the mechanical—­and the aftermath of
these clashes can be dramatic. People lose their jobs, companies
go bankrupt, disciplines lose their departments, and computer
users lose their old files. More often than not, clashes between
competing technologies create antagonisms between different
social groups. Their voices are (sometimes) heard, and society tries
to balance their interests.
If the institutional remedies cannot resolve the social antagonism,
the law is called on to mediate. Yet in the present, the legal system
only reproduces the schizoid impasse where the metaphor of property over land is applied to works of intellect that have in practical
terms become universally accessible in the digital world. Court
cases do not result in a restoration of balance but rather in the
confirmation of entrenched interests. It is, however, not necessary
that courts act in such a one-­sided manner. As Cornelia Vismann
(2011) reminds us in her analysis of the ancient roots of legal mediation, the juridical process has two facets: first, a theatrical aspect
that has common roots with the Greek dramatic theatre and its
social function as a translator of a matter of conflict into a case for
weighted juridical debate; second, an agonistic aspect not unlike a
sporting competition where a winner has to be decided, one that leads to judgment and sanction. In the matter of copyright versus
access, however, the fact that courts cannot look past the metaphor of intellectual property, which reduces any understanding of
our contemporary technosocial condition to an analogy with the
scarcity-­based language of property over land, has meant that they
have failed to adjudicate a matter of conflict between the equalizing effects of universal access to knowledge and the guarantees of
rightful remuneration for intellectual labor into a meaningful social
resolution. Rather they have primarily reasserted the agonistic
aspect by supporting exclusively the commercial interests of large
copyright industries that structure and deepen that conflict at the
societal level.
This is not surprising. Like many other elements of contemporary
law, the legal norms of copyright were articulated and codified
through the centuries-­long development of the capitalist state
and world-system. The legal system is, as Nicos Poulantzas (2008,
25–­26) suggests, genetically structured by capitalist development.
And yet at the same time it is semi-­autonomous; the development
of its norms and institutional aspects is largely endogenous and
partly responsive to the specific needs of other social subsystems.
Still, if the law and the courts are the codified and lived rationality
of a social formation, then the choice of intellectual property as a
metaphor in capitalist society comes as no surprise, as its principal
objective is to institute a formal political-­economic framework for
the commodification of intellectual labor that produces knowledge
and culture. There can be no balance, only subsumption and
accumulation. Capitalism and schizophrenia.
Schizophrenia abounds wherever the discrete and the digital,
breaking down barriers to access, meet capitalism. One can only wonder
how the conflicting interests of different divisions get disputed
and negotiated in successful corporate giants like Sony Group
where Sony Pictures Entertainment,2 Sony Music Entertainment3
and Sony Computer Entertainment coexist under the same roof
with the Sony Electronics division, which invented the Walkman
back in 1979 and went on to manufacture devices and gadgets like

home (and professional) audio and video players/recorders (VHS,
Betamax, TV, HiFi, cassette, CD/DVD, mp3, mobile phones, etc.),
storage devices, personal computers, and game consoles. In the
famous 1984 Betamax case (“Sony Corp. of America v. Universal
City Studios, Inc.,” Wikipedia 2015), Universal Studios and the Walt
Disney Company sued Sony for aiding copyright infringement with
their Betamax video recorders. Sony won. The court decision in
favor of fair use rather than copyright infringement laid the legal
ground for home recording technology as the foundation of future
analog, and subsequently digital, content sharing.
Five years later, Sony bought its first major Hollywood studio:
Columbia Pictures. In 2004 Sony Music Entertainment merged with
Bertelsmann Music Group to create Sony BMG. However, things
changed as Sony became the content producer and we entered the
age of the discrete and the digital. Another five years later, in 2009,
Sony BMG sued Joel Tenenbaum for downloading and then sharing
thirty songs. The jury awarded US$675,000 to the music
companies (US$22,500 per song). This is known as “the second
file-sharing case.” “The first file-sharing case” was 2007’s Capitol Records, Inc. v. Thomas-Rasset, which concerned the downloading of
twenty-four songs. In that first case, the jury awarded the
music companies US$1,920,000 in statutory damages (US$80,000
per song). The defendant, Jammie Thomas, was a Native American
mother of four from Brainerd, Minnesota, who worked at the time
as a natural resources coordinator for the Mille Lacs Band of the
Native American Ojibwe people. The conflict between access and
copyright thus came into sharp social relief.
Encouraged by the court decisions in the years that followed, the
movie and music industries have started to publicly claim staggering annual losses: US$58 billion and 370,000 lost jobs
in the United States alone. The purported losses in sales were,
however, at least seven times bigger than the actual losses and,
if the jobs figures had been true, after only one year there would
have been no one left working in the content industry (Reid 2012).
Capitalism and schizophrenia.


If there is a reason to make an exception from the landed logic of
property being imposed onto the world of the intellect, a reason
to which few would object, it would be for access for educational
purposes. Universities in particular give an institutional form to
the premise that equal access to knowledge is a prerequisite for
building a society where all people are equal.
In this noble endeavor to make universal access to knowledge
central to social development, some universities stand out more
than others. Consider, for example, the Massachusetts Institute
of Technology (MIT). The Free Culture and Open Access movements
have never hidden their origins, inspiration, and model in the
success of the Free Software Movement, which was founded in
1984 by Richard Stallman while he was working at the MIT Artificial
Intelligence lab. It was at the MIT Museum that the “Hall of Hacks”
was set up to proudly display the roots of hacking culture. Hacking
culture at MIT takes many shapes and forms. MIT hackers famously
put a fire truck (2006) and a campus police car (1994) onto the
roof of the Great Dome of the campus’s Building 10; they landed
(and then exploded) a weather balloon onto the pitch of Harvard
Stadium during a Harvard–­Yale football game; turned the quote
that “getting an education from MIT is like taking a drink from a Fire
Hose” into a literal fire hydrant serving as a drinking fountain in
front of the largest lecture hall on campus; and many, many other
“hacks” (Peterson 2011).
The World Wide Web Consortium was founded at MIT in 1994.
Presently its mission states as its goal “to enable human communication, commerce, and opportunities to share knowledge,”
on the principles of “Web for All” and the corresponding, more
technologically focused “Web on Everything.” Similarly, MIT began
its OpenCourseWare project in 2002 in order “to publish all of
[MIT’s] course materials online and make them widely available to
everyone” (n.d.). The One Laptop Per Child project was created in
2005 in order to help children “learn, share, create, and collaborate” (2010). Recently the MIT Media Lab (2017) has even started its
own Disobedience Award, which “will go to a living person or group

engaged in what we believe is extraordinary disobedience for
the benefit of society . . . seeking both expected and unexpected
nominees.” When it comes to the governance of access to MIT’s
own resources, it is well known that anyone who is registered and
connected to the “open campus” wireless network, either by being
physically present or via VPN, can search JSTOR, Google Scholar,
and other databases in order to access otherwise paywalled journals from major publishers such as Reed Elsevier, Wiley-­Blackwell,
Springer, Taylor and Francis, or Sage.
The MIT Press has also published numerous books that we love
and without which we would have never developed the Public
Library/Memory of the World project to the stage where it is now.
For instance, only after reading Markus Krajewski’s Paper Machines: About Cards & Catalogs, 1548–­1929 (2011) and learning how
conceptually close librarians came to the universal Turing machine
with the invention of the index card catalog did we center the
Public Library/Memory of the World around the idea of the catalog.
Eric von Hippel’s Democratizing Innovation (2005) taught us how end
users could become empowered to innovate and accordingly we
have built our public library as a distributed network of amateur
librarians acting as peers sharing their catalogs and books. Sven
Spieker’s The Big Archive: Art from Bureaucracy (2008) showed us the
exciting hybrid meta-­space between psychoanalysis, media theory,
and conceptual art one could encounter by visiting the world of
catalogs and archives. Understanding capitalism and schizophrenia would have been hard without Semiotext(e)’s translations of
Deleuze and Guattari, and remaining on the utopian path would
have been impossible if not for our reading of Cybernetic Revolutionaries (Medina 2011), Imagine No Possessions (Kiaer 2005), or Art
Power (Groys 2008).

Our Road into Schizophrenia, Commodity
Paradox, Political Strategy
Our vision for the Public Library/Memory of the World resonated
with many people. After the project initially gained a large number of users, and was presented in numerous prominent artistic
venues such as Museum Reina Sofía, Transmediale, Württembergischer Kunstverein, Calvert22, 98weeks, and many more, it was no
small honor when Eric Kluitenberg and David Garcia invited us to
write about the project for an anthology on tactical media that was
to be published by the MIT Press. Tactical media is exactly where
we would situate ourselves on the map. Building on Michel de
Certeau’s concept of tactics as agency of the weak operating in the
terrain of strategic power, the tactical media (Tactical Media Files
2017) emerged in the political and technological conjuncture of the
1990s. Falling into the “art-­into-­life” lineage of historic avant-­gardes,
Situationism, DIY culture, techno-­hippiedom, and media piracy, it
constituted a heterogeneous field of practices and a manifestly
international movement that combined experimental media and
political activism into interventions that contested the post–­Cold
War world of global capitalism and preemptive warfare on a hybrid
terrain of media, institutions, and mass movements. Practices of
tactical media ranged from ephemeral media pranks, hoaxes, and
hacktivism to reappropriations of media apparatuses, institutional
settings, and political venues. We see our work as following in
that lineage of recuperation of the means of communication from
their capture by personal and impersonal structures of political or
economic power.
Yet the contract for our contribution that the MIT Press sent us in
early 2015 was an instant reminder of the current state of affairs
in academic publishing: in return for our contribution and transfer
of our copyrights, we would receive no compensation: no right to
wage and no right to further distribute our work.
Only weeks later our work would land us fully into schizophrenia:
the Public Library/Memory of the World received two takedown
notices from the MIT Press for books that could be found in its
then relatively small yet easily discoverable online collection
located at https://library.memoryoftheworld.org, including a notice
for one of the books that had served as an inspiration to us: Art
Power. First, no wage and, now, no access. A true paradox of the

present-­day system of knowledge production: products of our
labor are commodities, yet the labor-­power producing them is
denied the same status. While the project’s vision resonates with
many, including the MIT Press, it has to be shut down. Capitalism
and schizophrenia.4
Or, maybe, not. Maybe we don’t have to go down that impasse.
Starting from the two structural circumstances imposed on us by
the MIT Press—­the denial of wage and the denial of access—­we
can begin to analyze why copyright infringement is not merely, as
the industry and the courts would have it, a matter of illegality, but
rather a matter of legitimate action.
Over the past three decades a deep transformation, induced by
the factors of technological change and economic restructuring,
has been unfolding at different scales, changing the way works
of culture and knowledge are produced and distributed across
an unevenly developed world. As new technologies are adopted,
generalized, and adapted to the realities of the accumulation
process—­a process we could see unfolding with the commodification of the internet over the past fifteen years—­the core and
the periphery adopt different strategies of opposition to the
inequalities and exclusions these technologies start to reproduce.
The core, with its emancipatory and countercultural narratives,
pursues strategies that develop legal, economic, or technological
alternatives. However, these strategies frequently fail to secure
broader transformative effects as the competitive forces of the
market appropriate, marginalize, or make obsolete the alternatives
they advocate. Such seems to have been the destiny of much of the
free software, open access, and free culture alternatives that have
developed over this period.
In contrast, the periphery, in order to advance, relies on strategies
of “stealing” that bypass socioeconomic barriers by refusing to
submit to the harmonized regulation that sets the frame for global
economic exchange. The piracy of intellectual property or industrial
secrets thus creates a shadow system of exchange resisting the asymmetries of development in the world economy. However, its
illegality serves as a pretext for the governments and companies of
the core to devise and impose further controls over the technosocial systems that facilitate these exchanges.
Both strategies develop specific politics—­a politics of reform, on
the one hand, and a politics of obfuscation and resistance, on the
other—­yet both are defensive politics that affirm the limitations
of what remains inside and what remains outside of the politically
legitimate.
The copyright industry giants of the past and the IT industry giants
of the present are thus currently sorting out to whose greater
benefit this new round of commodification will work out. For those
who find themselves outside the camps of these two factions
of capital, there’s a window of opportunity, however, to reconceive
the mode of production of literature and science that has been
with us since the beginning of the print trade and the dawn of capitalism. It’s a matter of change, at the tail end of which ultimately
lies a dilemma: whether we’re going to live in a more equal or a
more unjust, a more commonised or a more commodified world.

Authorship, Law, and Legitimacy
Before we can talk of such structural transformation, the normative
question we expect to be asked is whether something that is considered a matter of law and juridical decision can be made a matter
of politics and political process. Let’s see.
Copyright has a fundamentally economic function—­to unambiguously establish individualized property in the products of creative
labor. A clear indication of this economic function is the substantive requirement of originality that the work is expected to have
in order to be copyrightable. Legal interpretations set a very low
standard on what counts as original, as their function is no more
than to demarcate one creative contribution from another. Once
a legal title is unambiguously assigned, there is a person holding

property with whose consent the contracting, commodification,
and marketing of the work can proceed.5 In that respect copyright
is not that different from the requirement of formal freedom that
is granted to a laborer to contract out their own labor-­power as a
commodity to capital, giving capital authorization to extract maximum productivity and appropriate the products of the laborer’s
labor.6 Copyright might be just a more efficient mechanism of
exploitation, as it unfolds through the selling of produced commodities
and not of labor-power: the art market obscures and mediates the
capital-labor relation.
When we talk today of illegal copying, we primarily mean an
infringement of the legal rights of authors and publishers. There’s an
immediate assumption that the infringing practice of illegal copying
and distribution falls under the domain of juridical sanction, that it is
a matter of law. Yet if we look to the history of copyright, the illegality
of copying was a political matter long before it became a legal one.
Publisher’s rights, author’s rights, and mechanisms of reputation—­
the three elements that are fundamental to the present-­day
copyright system—­all have their historic roots in the context of
absolutism and early capitalism in seventeenth-­and eighteenth-­
century Europe. Before publishers and authors were given a
temporary monopoly over the exploitation of their publications
instituted in the form of copyright, they were operating in a system
where they were forced to obtain a privilege to print books from
royal censors. The first printing privileges granted to publishers, in
early seventeenth-­century Great Britain,7 came with the responsibility of publishers to control what was being published and
disseminated in a growing body of printed matter that started to
reach the public in the aftermath of the invention of print and the
rise of the reading culture. The illegality in these early days of print
referred either to printing books without the permission of the
censor or printing books that were already published by another
printer in the territory where the censor held authority. The transition from the privilege tied to the publisher to the privilege tied to
the natural person of the author would unfold only later.


In the United Kingdom this transition occurred as the guild of
printers, Stationers’ Company, failed to secure the extension of its
printing monopoly and thus, in order to continue with its business,
decided to advocate the introduction of copyright for the authors
instead. This resulted in the passing of the Copyright Act of 1709,
also known as the Statute of Anne (Rose 2010). The censoring
authority and enterprising publishers now proceeded in lockstep to
isolate the author as the central figure in the regulation of literary
and scientific production. Not only did the author receive exclusive
rights to the work, the author was also made—­as Foucault has
famously analyzed (Foucault 1980, 124)—­the identifiable subject of
scrutiny, censorship, and political sanction by the absolutist state.
Although the Romantic author slowly took the center stage in
copyright regulations, economic compensation for the work would
long remain no more than honorary. Until well into the eighteenth
century, literary writing and creativity in general were regarded as
resulting from divine inspiration and not the individual genius of
the author. Writing was a work of honor and distinction, not something requiring an honest day’s pay.8 Money earned in the growing
printing industry mostly stayed in the pockets of publishers, while
the author received literally an honorarium, a flat sum that served
as a “token of esteem” (Woodmansee 1996, 42). It is only once
authors began to voice demands for securing their material and
political independence from patronage and authority that they also
started to make claims for rightful remuneration.
Thus, before it was made a matter of law, copyright was a matter of
politics and economy.

Copyright, Labor, and Economic Domination
The full-­blown affirmation of the Romantic author-­function marks
the historic moment where a compromise is established between
the right of publishers to the economic exploitation of works and
the right of authors to rightful compensation for those works. Economically, this redistribution from publishers to authors was made

possible by the expanding market for printed books in the eighteenth and nineteenth centuries, while politically this was catalyzed
by the growing desire for the autonomy of scientific and literary
production from the system of feudal patronage and censorship
in gradually liberalizing and modernizing capitalist societies. The
newfound autonomy of production was substantially coupled to
production specifically for the market. However, this irenic balance
could not last for very long. Once the production of culture and
science was subsumed under the exigencies of the generalized
market, it had to follow the laws of commodification and competition from which no form of commodity production can escape.
By the beginning of the twentieth century, copyright expanded to
a number of other forms of creativity, transcending its primarily
literary and scientific ambit and becoming part of the broader
set of intellectual property rights that are fundamental to the
functioning and positioning of capitalist enterprise. The corporatization of the production of culture and knowledge thus brought
about a decisive break from the Romantic model that singularized
authorship in the person of the author. The production of cultural
commodities nowadays involves a number of creative inputs from
both credited (but mostly unwaged) and uncredited (but mostly
waged) contributors. The “moral rights of the author,” a substantive
link between the work and the person of the author, are markedly
out of step with these realities, yet they still perform an important
function in the moral economy of reputation, which then serves as
the legitimation of copyright enforcement and monopoly. Moral
rights allow easy attribution; incentivize authors to subsidize
publishers by self-­financing their own work in the hope of topping
the sales charts, rankings, or indexes; and help markets develop
along winner-­takes-­all principles.
The level of concentration in industries primarily concerned with
various forms of intellectual property rights is staggering. The film
industry is a US$88 billion industry dominated by six major studios
(PwC 2015b). The recorded music industry is an almost US$20
billion industry dominated by only three major labels (PwC 2015c).


The publishing industry is a US$120 billion industry where the
leading ten companies earn in revenues more than the next forty
largest publishing groups (PwC 2015a; Wischenbart 2014).

The Oligopoly and Academic Publishing
Academic publishing in particular draws the state of play into stark
relief. It’s a US$10 billion industry dominated by five publishers and
financed up to 75 percent from library subscriptions. It’s notorious
for achieving extreme year-­on-­year profit margins—­in the case of
Reed Elsevier regularly over 30 percent, with Taylor and Francis,
Springer, Wiley-­Blackwell and Sage barely lagging behind (Larivière,
Haustein, and Mongeon 2015). Given that the work of contributing
authors is not paid but rather financed by their institutions (provided, that is, that they are employed at an institution) and that
these publications nowadays come mostly in the form of electronic
articles licensed under subscription for temporary use to libraries
and no longer sold as printed copies, the public interest could be
served at a much lower cost by leaving commercial closed-­access
publishers out of the equation entirely.
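
Taking the figures just cited at face value, a rough back-of-the-envelope calculation, ours rather than the essay's, suggests the scale of public money involved; the numbers below are approximations for illustration only:

# Illustrative arithmetic using the approximate figures cited above.
industry_revenue = 10e9      # ~US$10 billion academic publishing market
library_share = 0.75         # up to 75% financed from library subscriptions
profit_margin = 0.30         # margins regularly over 30% (e.g. Reed Elsevier)

paid_by_libraries = industry_revenue * library_share       # ~US$7.5 billion a year
profit_on_library_spend = paid_by_libraries * profit_margin  # ~US$2.25 billion a year

print(f"Libraries pay roughly ${paid_by_libraries/1e9:.1f}B a year,")
print(f"of which roughly ${profit_on_library_spend/1e9:.2f}B is publisher profit rather than publishing cost.")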
But that cannot be done, of course. The chief reason for this is that
the system of academic reputation and ranking based on publish-­
or-­perish principles is historically entangled with the business of
academic publishers. Anyone who doesn’t want to put their academic career at risk is advised to steer away from being perceived
as reneging on that not-­so-­tacit deal. While this is patently clear
to many in academia, opting for the alternative of open access
means not playing by the rules, and not playing by the rules can
have real-­life consequences, particularly for younger academics.
Early career scholars have to publish in prestigious journals if they
want to advance in the highly competitive and exclusive system of
academia (Kendzior 2012).
Copyright in academic publishing has thus become simply a mechanism of the direct transfer of economic power from producers to
publishers, giving publishers an instrument for maintaining their

stranglehold on the output of academia. But publishers also have
control over metrics and citation indexes, pandering to the authors
with better tools for maximizing their impact and self-­promotion.
Reputation and copyright are extortive instruments that publishers
can wield against authors and the public to prevent an alternative
from emerging.9
The state of the academic publishing business signals how the
“copyright industries” in general might continue to control the
field as their distribution model now transitions to streaming or
licensed-­access models. In the age of cloud computing, autonomous infrastructures run by communities of enthusiasts are
becoming increasingly a thing of the past. “Copyright industries,”
supported by the complicit legal system, now can pressure proxies
for these infrastructures, such as providers of server colocation,
virtual hosting, and domain-­name network services, to enforce
injunctions for them without ever getting involved in direct, costly
infringement litigation. Efficient shutdowns of precarious shadow
systems allow for a corporate market consolidation wherein the
majority of streaming infrastructures end up under the control of a
few corporations.

Illegal Yet Justified, Collective Civil
Disobedience, Politicizing the Legal
However, when companies do resort to litigation or get involved in
criminal proceedings, they can rest assured that the prosecution
and judicial system will uphold their interests over the right of
the public to access culture and knowledge, even when the irrationality
of the copyright system lies in plain sight, as it does in the case of
academic publishing. Let’s look at two examples:
On January 6, 2011, Aaron Swartz, a prominent programmer
and hacktivist, was arrested by the MIT campus police and U.S.
Secret Service on charges of having downloaded a large number
of academic articles from the JSTOR repository. While JSTOR, with
whom Swartz reached a settlement and to whom he returned the files, and, later, MIT, would eventually drop the charges, the federal
prosecution decided nonetheless to indict Swartz on thirteen
criminal counts, potentially leading to fifty years in prison and a
US$1 million fine. Under growing pressure by the prosecution
Swartz committed suicide on January 11, 2013.
Given his draconian treatment at the hands of the prosecution
and the absence of institutions of science and culture that would
stand up and justify his act on political grounds, much of Swartz’s
defense focused on trying to exculpate his acts, to make them less
infringing or less illegal than the charges brought against him had
claimed, a rational course of action in irrational circumstances.
However, this was unfortunately becoming an uphill battle as the
prosecution’s attention was accidentally drawn to a statement
written by Swartz in 2008 wherein he laid bare the dysfunctionality
of the academic publishing system. In his Guerrilla Open Access
Manifesto, he wrote: “The world’s entire scientific and cultural heritage, published over centuries in books and journals, is increasingly
being digitized and locked up by a handful of private corporations. . . . Forcing academics to pay money to read the work of their
colleagues? Scanning entire libraries but only allowing the folks at
Google to read them? Providing scientific articles to those at elite
universities in the First World, but not to children in the Global
South? It’s outrageous and unacceptable.” After a no-­nonsense
diagnosis followed an even more clear call to action: “We need
to download scientific journals and upload them to file sharing
networks. We need to fight for Guerilla Open Access” (Swartz 2008).
Where a system has failed to change unjust laws, Swartz felt, the
responsibility was on those who had access to make injustice a
thing of the past.
Whether Swartz’s intent actually was to release the JSTOR repository remains subject to speculation. The prosecution has never
proven that it was. In the context of the legal process, his call to
action was simply taken as a matter of law and not for what it
was—­a matter of politics. Yet, while his political action was pre-

empted, others have continued pursuing his vision by committing
small acts of illegality on a massive scale. In June 2015 Elsevier won
an injunction against Library Genesis, the largest illegal repository
of electronic books, journals, and articles on the Web, and its
subsidiary platform for accessing academic journals, Sci-­hub. A
voluntary and noncommercial project of anonymous scientists
mostly from Eastern Europe, Sci-hub provides, as of the end of 2015,
access to more than 41 million academic articles either stored
in its database or retrieved through bypassing the paywalls of
academic publishers. The only person explicitly named in Elsevier’s
lawsuit was Sci-­hub’s founder Alexandra Elbakyan, who minced no
words: “When I was working on my research project, I found out
that all research papers I needed for work were paywalled. I was
a student in Kazakhstan at the time and our university was not
subscribed to anything” (Ernesto 2015). Being a computer scientist,
she found the tools and services on the internet that allowed her to
bypass the paywalls. At first, she would make articles available on
internet forums where people would file requests for the articles
they needed, but eventually she automated the process, making
access available to everyone on the open web. “Thanks to Elsevier’s
lawsuit, I got past the point of no return. At this time I either have
to prove we have the full right to do this or risk being executed like
other ‘pirates’ . . . If Elsevier manages to shut down our projects or
force them into the darknet, that will demonstrate an important
idea: that the public does not have the right to knowledge. . . .
Everyone should have access to knowledge regardless of their
income or affiliation. And that’s absolutely legal. Also the idea
that knowledge can be a private property of some commercial
company sounds absolutely weird to me” (Ernesto 2015).
If the issue of infringement is to become political, a critical mass
of infringing activity has to be achieved, access technologically
organized, and civil disobedience collectively manifested. Only in
this way do the illegal acts stand a chance of being transformed
into legitimate acts.


Where Law Was, There Politics Shall Be
And thus we have made a full round back to where we started. The
parallel development of liberalism, copyright, and capitalism has
resulted in a system demanding that the contemporary subject
act in accordance with two opposing tendencies: “more capitalist
than capitalist and more proletarian than proletariat” (Deleuze
and Guattari 1983, 34). Schizophrenia is, as Deleuze and Guattari
argue, a condition that simultaneously embodies two disjunctive
positions. Desire and blockage, flow and territory. Capitalism is
the constant decoding of social blockages and territorializations
aimed at liberating the production of desires and flows further
and further, only to oppose them at its extreme limit. It decodes
the old socius by means of private property and commodity
production, privatization and abstraction, the flow of wealth and
flows of workers (140). It allows contemporary subjects—­including
corporate entities such as the MIT Press or Sony—­to embrace their
contradictions and push them to their limits. But by capturing them in
the orbit of the self-expanding production of value, it stops them
from going beyond its own limit. It is this orbit that the law sanctions
in the present, recoding schizoid subjects into the inevitability of
capitalism. The result is the persistence of a capitalist reality antithetical to common interest—­commercial closed-­access academic
publishing—­and the persistence of a hyperproletariat—­an intellectual labor force that is too subsumed to organize and resist the
reality that thrives parasitically on its social function. It’s a schizoid
impasse sustained by a failed metaphor.
The revolutionary events of the Paris Commune of 1871, its mere
“existence” as Marx has called it,10 a brief moment of “communal
luxury” set in practice as Kristin Ross (2015) describes it, demanded
that, in spite of any circumstances and reservations, one take a
side. And such is our present moment of truth.
Digital networks have expanded the potential for access and
created an opening for us to transform the production of knowledge and culture in the contemporary world. And yet they have
likewise facilitated the capacity of intellectual property industries

to optimize, to cut out the cost of printing and physical distribution.
Digitization is increasingly helping them to control access, expand
copyright, impose technological protection measures, consolidate
the means of distribution, and capture the academic valorization
process.
As the potential opening for universalizing access to culture and
knowledge created by digital networks is now closing, attempts at
private legal reform such as Creative Commons licenses have had
only a very limited effect. Attempts at institutional reform such as
Open Access publishing are struggling to go beyond a niche. Piracy
has mounted a truly disruptive opposition, but given the legal
repression it has met with, it can become an agent of change only if
it is embraced as a kind of mass civil disobedience. Where law was,
there politics shall be.
Many will object to our demand to replace the law with politicization. Transitioning from politics to law was a social achievement
as the despotism of political will was suppressed by legal norms
guaranteeing rights and liberties for authors; this much is true. But
in the face of the draconian, failed juridical rationality sustaining
the schizoid impasse imposed by economic despotism, these developments hold little justification. Thus we return once more to the
words of Aaron Swartz to whom we remain indebted for political
inspiration and resolve: “There is no justice in following unjust laws.
It’s time to come into the light and, in the grand tradition of civil
disobedience, declare our opposition to this private theft of public
culture. . . . With enough of us, around the world, we’ll not just send
a strong message opposing the privatization of knowledge—­we’ll
make it a thing of the past. Will you join us?” (Swartz 2008).

Notes
1. We initially named our project Public Library because we have developed it as a technosocial project from a minimal definition that sees a public library as constituted by three elements: free access to books for every member of a society, a library catalog, and a librarian (Mars, Zarroug and Medak, 2015). However, this definition covers all public libraries and shadow libraries complementing the work of public libraries in providing digital access. We have thus decided to rename our project as Memory of the World, after our project’s initial domain name. This is a phrase coined by Henri La Fontaine, whose mention we found in Markus Krajewski’s Paper Machines (2011). It turned out that UNESCO runs a project under the same name with the objective to preserve valuable archives for the whole of humanity. We have appropriated that objective. Given that this change has happened since we drafted the initial version of this text in 2015, we refer to our project in this text by the double name Public Library/Memory of the World.

2. Sony Pictures Entertainment became the owner of two (MGM, Columbia Pictures) out of eight Golden Age major movie studios (“Major Film Studio,” Wikipedia 2015).

3. In 2012 Sony Music Entertainment became one of the Big Three majors (“Record Label,” Wikipedia 2015).

4. Since this anecdote was recounted by Marcell in his opening keynote at the Terms of Media II conference at Brown University, we have received another batch of takedown notices from the MIT Press. It seemed no small irony, because at the time the Terms of Media conference reader was rumored to be distributed by the MIT Press.

5. “In law, authorship is a point of origination of a property right which, thereafter, like other property rights, will circulate in the market, ending up in the control of the person who can exploit it most profitably. Since copyright serves paradoxically to vest authors with property only to enable them to divest that property, the author is a notion which needs only to be sustainable for an instant” (Bently 1994).

6. For more on the formal freedom of the laborer to sell his labor-power, see chapter 6 of Marx’s Capital (1867).

7. For a more detailed account of the history of printing privilege in Great Britain, but also of the emergence of peer review out of the self-censoring performed by the Royal Academy and the Académie des sciences in return for the printing privilege, see Biagioli 2002.

8. The transition of authorship from honorific to professional is traced in Woodmansee 1996.

9. Not all publishers are necessarily predatory. For instance, scholar-led open-access publishers, such as those working under the banner of Radical Open Access (http://radicaloa.disruptivemedia.org), have been experimenting with alternatives to the dominant publishing models, workflows, and metrics, radicalizing the work of conventional open access, which has by now increasingly become recuperated by big for-profit publishers, who see in open access an opportunity to assume control over the economy of data in academia. Some established academic publishers, too, have been open to experiments that go beyond mere open access and are trying to redesign how academic writing is produced, made accessible, and valorized. This essay has the good fortune of appearing as a joint publication of two such publishers: Meson Press and University of Minnesota Press.

10. “The great social measure of the Commune was its own working existence” (Marx 1871).

References
Bently, Lionel. 1994. “Copyright and the Death of the Author in Literature and Law.”
The Modern Law Review 57, no. 6: 973–­86. Accessed January 2, 2018. doi:10.1111/
j.1468–­2230.1994.tb01989.x.
Biagioli, Mario. 2002. “From Book Censorship to Academic Peer Review.” Emergences:
Journal for the Study of Media & Composite Cultures 12, no. 1: 11–­45.
Bolter, Jay David, and Richard Grusin. 1999. Remediation: Understanding New Media.
Cambridge, Mass.: MIT Press.
Deleuze, Gilles, and Félix Guattari. 1983. Anti-­Oedipus: Capitalism and Schizophrenia.
Minneapolis: University of Minnesota Press.
Ernesto. 2015. “Sci-­Hub Tears Down Academia’s ‘Illegal’ Copyright Paywalls.” TorrentFreak, June 27. Accessed October 18, 2015. https://torrentfreak.com/sci-hub-tears
-down-academias-illegal-copyright-paywalls-150627/.
Foucault, Michel. 1980. “What Is an Author?” In Language, Counter-­Memory, Practice:
Selected Essays and Interviews, ed. Donald F. Bouchard, 113–­38. Ithaca, N.Y.: Cornell
University Press.
Groys, Boris. 2008. Art Power. Cambridge, Mass.: MIT Press.
Kendzior, Sarah. 2012. “Academic Paywalls Mean Publish and Perish.” Al Jazeera
English, October 2. Accessed October 18, 2015. http://www.aljazeera.com/indepth/
opinion/2012/10/20121017558785551.html.
Kiaer, Christina. 2005. Imagine No Possessions: The Socialist Objects of Russian Constructivism. Cambridge, Mass.: MIT Press.
Krajewski, Markus. 2011. Paper Machines: About Cards & Catalogs, 1548–­1929. Cambridge, Mass.: MIT Press.
Larivière, Vincent, Stefanie Haustein, and Philippe Mongeon. 2015. “The Oligopoly of
Academic Publishers in the Digital Era.” PLoS ONE 10, no. 6. Accessed January 2,
2018. doi:10.1371/journal.pone.0127502.
Mars, Marcell, Marar Zarroug, and Tomislav Medak. 2015. “Public Library (essay).” In
Public Library, ed. Marcell Mars and Tomislav Medak. Zagreb: Multimedia Institute
& What, How & for Whom/WHW.
Marx, Karl. 1867. Capital, Vol. 1. Available at: Marxists.org. Accessed April 9, 2017.
https://www.marxists.org/archive/marx/works/1867-c1/ch06.htm.
Marx, Karl. 1871. “The Civil War in France.” Available at: Marxists.org. Accessed April 9,
2017. https://www.marxists.org/archive/marx/works/1871/civil-war-france/.
McLuhan, Marshall. 1965. Understanding Media: The Extensions of Man. New York:
McGraw-­Hill.
Medina, Eden. 2011. Cybernetic Revolutionaries: Technology and Politics in Allende’s
Chile. Cambridge, Mass.: MIT Press.
MIT Media Lab. 2017. “MIT Media Lab Disobedience Award.” Accessed April 10, 2017.
https://media.mit.edu/disobedience/.
MIT OpenCourseWare. n.d. “About OCW | MIT OpenCourseWare | Free Online
Course Materials.” Accessed October 28, 2015. http://ocw.mit.edu/about/.
One Laptop per Child. 2010. “One Laptop per Child (OLPC): Vision.” Accessed October
28, 2015. http://laptop.org/en/vision/.


Peterson, T. F., ed. 2011. Nightwork: A History of Hacks and Pranks at MIT. Cambridge, Mass.: MIT Press.
Poulantzas, Nicos. 2008. The Poulantzas Reader: Marxism, Law, and the State. London: Verso.
PwC. 2015a. “Book Publishing.” Accessed October 18, 2015. http://www.pwc.com/gx/en/industries/entertainment-media/outlook/segment-insights/book-publishing.html.
PwC. 2015b. “Filmed Entertainment.” Accessed October 18, 2015. http://www.pwc.com/gx/en/industries/entertainment-media/outlook/segment-insights/filmed-entertainment.html.
PwC. 2015c. “Music: Growth Rates of Recorded and Live Music.” Accessed October 18, 2015. http://www.pwc.com/gx/en/global-entertainment-media-outlook/assets/2015/music-key-insights-1-growth-rates-of-recorded-and-live-music.pdf.
Reid, Rob. 2012. “The Numbers behind the Copyright Math.” TED Blog, March 20. Accessed October 28, 2015. http://blog.ted.com/the-numbers-behind-the-copyright-math/.
Rose, Mark. 2010. “The Public Sphere and the Emergence of Copyright.” In Privilege and Property: Essays on the History of Copyright, ed. Ronan Deazley, Martin Kretschmer, and Lionel Bently, 67–88. Open Book Publishers.
Ross, Kristin. 2015. Communal Luxury: The Political Imaginary of the Paris Commune. London: Verso.
Spieker, Sven. 2008. The Big Archive: Art from Bureaucracy. Cambridge, Mass.: MIT Press.
Swartz, Aaron. 2008. “Guerilla Open Access Manifesto.” Internet Archive. Accessed October 18, 2015. https://archive.org/stream/GuerillaOpenAccessManifesto/Goamjuly2008_djvu.txt.
Tactical Media Files. 2017. “The Concept of Tactical Media.” Accessed May 4, 2017. http://www.tacticalmediafiles.net/articles/44999.
Vismann, Cornelia. 2011. Medien der Rechtsprechung. Frankfurt a.M.: S. Fischer Verlag.
von Hippel, Eric. 2005. Democratizing Innovation. Cambridge, Mass.: MIT Press.
Wikipedia, the Free Encyclopedia. 2015a. “Major Film Studio.” Accessed January 2, 2018. https://en.wikipedia.org/w/index.php?title=Major_film_studio&oldid=686867076.
Wikipedia, the Free Encyclopedia. 2015b. “Record Label.” Accessed January 2, 2018. https://en.wikipedia.org/w/index.php?title=Record_label&oldid=685380090.
Wikipedia, the Free Encyclopedia. 2015c. “Sony Corp. of America v. Universal City Studios, Inc.” Accessed January 2, 2018. https://en.wikipedia.org/w/index.php?title=Sony_Corp._of_America_v._Universal_City_Studios,_Inc.&oldid=677390161.
Wischenbart, Rüdiger. 2015. “The Global Ranking of the Publishing Industry 2014.” Wischenbart. Accessed October 18, 2015. http://www.wischenbart.com/upload/Global-Ranking-of-the-Publishing-Industry_2014_Analysis.pdf.
Woodmansee, Martha. 1996. The Author, Art, and the Market: Rereading the History of Aesthetics. New York: Columbia University Press.
World Wide Web Consortium. n.d. “W3C Mission.” Accessed October 28, 2015. http://www.w3.org/Consortium/mission.



tactics in Mars & Medak 2017


Mars & Medak
Knowledge Commons and Activist Pedagogies
2017


KNOWLEDGE COMMONS AND ACTIVIST PEDAGOGIES: FROM IDEALIST POSITIONS TO COLLECTIVE ACTIONS
Conversation with Marcell Mars and Tomislav Medak (co-authored with Ana Kuzmanic)

Marcell Mars is an activist, independent scholar, and artist. His work has been instrumental in the development of civil society in Croatia and beyond. Marcell is one of the founders of the Multimedia Institute – mi2 (1999) (Multimedia Institute, 2016a) and the Net.culture club MaMa in Zagreb (2000) (Net.culture club MaMa, 2016a). He is a member of Creative Commons Team Croatia (Creative Commons, 2016). He initiated the GNU GPL publishing label EGOBOO.bits (2000) (Monoskop, 2016a), the Skill sharing meetings of technical enthusiasts (Net.culture club MaMa, 2016b) and various events and gatherings in the fields of hackerism, digital cultures, and new media art. Marcell regularly talks and runs workshops about hacking, free software philosophy, digital cultures, social software, the semantic web, etc. In 2011–2012 Marcell conducted research on Ruling Class Studies at the Jan van Eyck Academie in Maastricht, and in 2013 he held a fellowship at Akademie Schloss Solitude in Stuttgart. Currently, he is a PhD researcher at the Digital Cultures Research Lab at Leuphana Universität Lüneburg.
Tomislav Medak is a cultural worker and theorist interested in political philosophy, media theory and aesthetics. He is an advocate of free software and free culture, and the Project Lead of Creative Commons Croatia (Creative Commons, 2016). He works as coordinator of theory and publishing activities at the Multimedia Institute/MaMa (Zagreb, Croatia) (Net.culture club MaMa, 2016a). Tomislav is an active contributor to the Croatian Right to the City movement (Pravo na grad, 2016). He has translated numerous books into Croatian, including Multitude (Hardt & Negri, 2009) and A Hacker Manifesto (Wark, 2006c). He is an author and performer with the internationally acclaimed Zagreb-based performance collective BADco (BADco, 2016). Tomislav writes and talks about the politics of technological development, and about politics and aesthetics.
Tomislav and Marcell have been working together for almost two decades.
Their recent collaborations include a number of activities around the Public Library
project, including HAIP festival (Ljubljana, 2012), exhibitions in
Württembergischer Kunstverein (Stuttgart, 2014) and Galerija Nova (Zagreb,
2015), as well as coordinated digitization projects Written-off (2015), Digital
Archive of Praxis and the Korčula Summer School (2016), and Catalogue of
Liberated Books (2013) (in Monoskop, 2016b).
Ana Kuzmanic is an artist based in Zagreb and Associate Professor at the Faculty of Civil Engineering, Architecture and Geodesy at the University of Split (Croatia), lecturing in drawing, design and architectural presentation. She is a member of the Croatian Association of Visual Artists. Since 2007 she has held more than a dozen individual exhibitions and taken part in numerous collective
exhibitions in Croatia, the UK, Italy, Egypt, the Netherlands, the USA, Lithuania
and Slovenia. In 2011 she co-founded the international artist collective Eastern
Surf, which has “organised, produced and participated in a number of projects
including exhibitions, performance, video, sculpture, publications and web based
work” (Eastern Surf, 2017). Ana's artwork critically deconstructs dominant social
readings of reality. It tests traditional roles of artists and viewers, giving the observer an active part in the creation of the artwork, thus creating spaces of dialogue and alternative learning experiences as platforms for emancipation and social
transformation. Grounded within a postdisciplinary conceptual framework, her
artistic practice is produced via research and expression in diverse media located at
the boundaries between reality and virtuality.
ABOUT THE CONVERSATION

I have known Marcell Mars since student days, yet our professional paths have
crossed only sporadically. In 2013 I asked for Marcell’s input on potential interlocutors for this book, and he connected me to McKenzie Wark. In late 2015, when we started working on our own conversation, Marcell involved Tomislav Medak. Marcell’s and Tomislav’s recent works are closely related to the arts, so I asked for Ana Kuzmanic’s input on these matters. Since the beginning of the
conversation, Marcell, Tomislav, Ana, and I occasionally discussed its generalities
in person. Yet, the presented conversation took place in a shared online document
between November 2015 and December 2016.
NET.CULTURE AT THE DAWN OF THE CIVIL SOCIETY

Petar Jandrić & Ana Kuzmanic (PJ & AK): In 1999, you established the
Multimedia Institute – mi2 (Multimedia Institute, 2016a); in 2000, you established
the Net.culture club MaMa (both in Zagreb, Croatia). The Net.culture club MaMa
has the following goals:
To promote innovative cultural practices and broadly understood social
activism. As a cultural center, it promotes a wide range of new artistic and
cultural practices related in the first place to the development of
communication technologies, as well as new tendencies in arts and theory:
from new media art, film and music to philosophy and social theory,
publishing and cultural policy issues.
As a community center, MaMa is Zagreb’s alternative ‘living room’ and
a venue free of charge for various initiatives and associations, whether they
are promoting minority identities (ecological, LGBTQ, ethnic, feminist and
others) or critically questioning established social norms. (Net.culture club
MaMa, 2016a)
Please describe the main challenges and opportunities from the dawn of Croatian
civil society. Why did you decide to establish the Multimedia Institute – mi2 and
the Net.culture club MaMa? How did you go about it?
Marcell Mars & Tomislav Medak (MM & TM): The formative context for
our work had been marked by the process of dissolution of Yugoslavia, ensuing
civil wars, and the rise of authoritarian nationalisms in the early 1990s. Amidst the
general turmoil and internecine bloodshed, three factors would come to define
what we consider today as civil society in the Croatian context. First, the newly
created Croatian state – in its pursuit of ethnic, religious and social homogeneity –
was premised on the radical exclusion of minorities. Second, the newly created
state dismantled the broad institutional basis of social and cultural diversity that
existed under socialism. Third, the newly created state pursued its own nationalist
project within the framework of capitalist democracy. In consequence, politically
undesirable minorities and dissenting oppositional groups were pushed to the
fringes of society, and yet, in keeping with the democratic system, had to be
allowed to legally operate outside of the state, its loyal institutions and its
nationalist consensus – as civil society. Under the circumstances of inter-ethnic
conflict, which put many people in direct or indirect danger, anti-war and human
rights activist groups such as the Anti-War Campaign provided an umbrella under
which political, student and cultural activists of all hues and colours could find a
common context. It is also within this context that the high modernism of cultural
production from the Yugoslav period, driven out from public institutions, had
found its recourse and its continuity.
Our loose collective, which would later come together around the Multimedia
Institute and MaMa, had been decisively shaped by two circumstances. The first
was the participation of the Anti-War Campaign, its BBS network ZaMir (Monoskop, 2016c), and in particular its journal Arkzin, in early European network culture.
Second, the Open Society Institute, which had financed much of the alternative and
oppositional activities during the 1990s, had started to wind down its operations
towards the end of the millennium. As the Open Society Institute started to spin off its
diverse activities into separate organizations, giving rise to the Croatian Law
Center, the Center for Contemporary Art and the Center for Drama Art, activities
related to Internet development ended up with the Multimedia Institute. The first
factor shaped us as activists and early adopters of critical digital culture, and the
second factor provided us with an organizational platform to start working
together. In 1998 Marcell was the first person invited to work with the Multimedia
Institute. He invited Vedran Gulin and Teodor Celakoski, who in turn invited other
people, and the group organically grew to its present form.
Prior to our coming together around the Multimedia Institute, we had been working on various projects such as setting up the cyber-culture platform Labinary in the space run by the artist initiative Labin Art Express in the former mining town of Labin, located in the north-western region of Istria. As we started working
together, however, we began to broaden these activities and explore various
opportunities for political and cultural activism offered by digital networks. One of
the early projects was ‘Radioactive’ – an initiative bringing together a broad group
of activists, which was supposed to result in a hybrid Internet/FM radio. The radio never came into being, yet the project fostered many follow-up activities around
new media and activism in the spirit of ‘don’t hate the media, become the media.’
In these early days, our activities had been strongly oriented towards technological
literacy and education; also, we had a strong interest in political theory and
philosophy. Yet, the most important activity at that time was opening the
Net.culture club MaMa in Zagreb in 2000 (Net.culture club MaMa, 2016a).
PJ & AK: What inspired you to found the Net.culture club MaMa?
MM & TM: We were not keen on continuing the line of work that the
Multimedia Institute was doing under the Open Society Institute, which included,
amongst other activities, setting up the first non-state owned Internet service
provider ZamirNet. The growing availability of Internet access and computer
hardware had made the task of helping political, cultural and media activists get
online less urgent. Instead, we thought that it would be much more important to
open a space where those activists could work together. At the brink of the
millennium, institutional exclusion and access to physical resources (including
space) needed for organizing, working together and presenting that work was a
pressing problem. MaMa was one of only three independent cultural spaces in Zagreb – the capital of Croatia, with almost one million inhabitants! The Open Society Institute provided us with a grant to adapt a former downtown leather shop in a state of disrepair and equip it with the latest technology, ranging from servers to DJ decks. These resources were made available to all members of the general public free of charge. Immediately, many artists, media people, technologists, and political activists started initiating their own programs at MaMa. Our activities ranged
from establishing art servers aimed at supporting artistic and cultural projects on
the Internet (Monoskop, 2016d) to technology-related educational activities,
cultural programs, and publishing. By 2000, nationalism had slowly been losing its
stranglehold on our society, and issues pertaining to capitalist globalisation had come to prominence. At MaMa, the period was marked by alter-globalization,
Indymedia, web development, East European net.art and critical media theory.
The confluence of these interests and activities resulted in many important
developments. For instance, soon after the opening of MaMa in 2000, a group of
young music producers and enthusiasts kicked off a daily music program with live
acts, DJ sessions and meetings to share tips and tricks about producing electronic
music. In parallel, we had been increasingly drawn to free software and its
underlying ethos and logic. The Yugoslav legacy of social ownership over the means of production and worker self-management made us think about how collectivized forms of cultural production, free of the exclusions of private property, could be expanded beyond the world of free software. We thus talked some of our musician friends
into opening the free culture label EGOBOO.bits and publishing their music,
together with films, videos and literary texts of other artists, under the GNU
General Public License. The EGOBOO.bits project had soon become uniquely
successful: producers such as Zvuk broda, Blashko, Plazmatick, Aesqe, No Name
No Fame, and Ghetto Booties were storming the charts, the label gradually grew to
fifty producers and formations, and we had the artists give regular workshops in
DJ-ing, sound editing, VJ-ing, video editing and collaborative writing at schools
and our summer camp Otokultivator. It inspired us to start working on alternatives
to the copyright regime and on issues of access to knowledge and culture.
PJ & AK: The civil society is the collective conscious, which provides leverage
against national and corporate agendas and serves as a powerful social corrective.
Thus, at the outbreak of the US invasion to Iraq, Net.culture club MaMa rejected a
$100 000 USAID grant because the invasion was:
a) a precedent based on the rationale of pre-emptive war, b) being waged in
disregard of legitimate processes of the international community, and c)
guided by corporate interests to control natural resources (Multimedia
Institute, 2003 in Razsa, 2015: 82).
Yet, only a few weeks later, MaMa accepted a $100,000 grant from the German
state – and this provoked a wide public debate (Razsa, 2015; Kršić, 2003; Stubbs,
2012).
Now that the heat of the moment has died down, what is your view of this
debate? More generally, how do you decide whose money to accept and whose
money to reject? How do you decide where to publish, where to exhibit, whom to
work with? What is the relationship between idealism and pragmatism in your
work?
MM & TM: Our decision seems justified yet insignificant in the face of the
aftermath of that historical moment. The unilateral decision of the US and its allies to
invade Iraq in March 2003 encapsulated both the defeat of global protest
movements that had contested the neoliberal globalisation since the early 1990s
and the epochal carnage that the War on Terror, in its never-ending iterations, is
still reaping today. Nowadays, the weaponized and privatized security regime
follows the networks of supply chains that cut across the logic of borders and have
become vital both for the global circuits of production and distribution (see Cowen,
2014). For the US, our global policeman, the introduction of unmanned weaponry
and all sorts of asymmetric war technologies has reduced the human cost of war to zero. By deploying drones and killer robots, it did away with the fundamental reality check of its own human casualties and made endless war
politically plausible. The low cost of war has resulted in the growing side-lining of
international institutions responsible for peaceful resolution of international
conflicts such as the UN.
Our 2003 decision carried hard consequences for the organization. In a capitalist
society, one can ensure wages either by relying on the market, or on the state, or on
private funding. The USAID grant was our first larger grant after the initial spinoff money from the Open Society Institute, and it meant that we could employ
some people from our community over the period of the next two years. Yet at the
same time, the USAID had become directly involved in Iraq, aiding the US forces
and various private contractors such as Halliburton in the dispossession and
plunder of the Iraqi economy. Therefore, it was unconscionable to continue
receiving money from them. In light of its moral and existential weight, the
decision to return the money thus had to be made by the general assembly of our
association.
People who were left without wages were part and parcel of the community that
we had built between 2000 and 2003, primarily through Otokultivator Summer
Camps and Summer Source Camp (Tactical Tech Collective, 2016). The other
grant we would receive later that year, from the Federal Cultural Foundation of the
German government, was split amongst a number of cultural organizations and
paid for activities that eventually paved the way for Right to the City (Pravo na
grad, 2016). However, we still could not pay the people who decided to return
USAID money, so they had to find other jobs. Money never comes without
conditionalities, and passing judgements while disregarding specific economic,
historic and organizational context can easily lead to apolitical moralizing.
We do have certain principles that we would not want to compromise – we do
not work with corporations, we are egalitarian in terms of income, our activities are
free for the public. In political activities, however, idealist positions make sense
only for as long as they are effective. Therefore, our idealism is through and
through pragmatic. It is in a similar manner that we invoke the ideal of the
library. We are well aware that reality is more complex than our ideals. However,
the collective sense of purpose inspired by an ideal can carry over into useful
collective action. This is the core of our interest …
PJ & AK: There has been a lot of water under the bridge since the 2000s. From
a ruined post-war country, Croatia has become an integral part of the European
Union – with all associated advantages and problems. What are today’s main
challenges in maintaining the Multimedia Institute and its various projects? What
are your future plans?
MM & TM: From the early days, Multimedia Institute/MaMa took a twofold
approach. It has always supported people working in and around the organization
in their heterogeneous interests including but not limited to digital technology and
information freedoms, political theory and philosophy, contemporary digital art,
music and cinema. Simultaneously, it has been strongly focused on social and
institutional transformation.
The moment zero of Croatian independence in 1991, which was marked by war,
ethnic cleansing and the forceful imposition of a contrived mono-national identity, saw progressive and modernist culture embracing the political alternative of the anti-war movement. It is within these conditions, which entailed exclusion from access
to public resources, that the Croatian civil society had developed throughout the
1990s. To address this denial of access to financial and spatial resources to civil
society, since 2000 we have been organizing collective actions with a number of
cultural actors across the country to create alternative routes for access to resources
– mutual support networks, shared venues, public funding, alternative forms of
funding. All the while, that organizational work has been implicitly situated in an
understanding of commons that draws on two sources – the social contract of the
free software community, and the legacy of social ownership under socialism.
Later on, this line of work developed towards intersectional struggles around spatial justice and against the privatisation of public services, which coalesced around the Right to the City movement (2007 to present) (Pravo na grad, 2016) and the 2015 campaign against the monetization of the national highway network.
In early 2016, with the arrival of the short-lived Croatian government formed by
a coalition of inane technocracy and rabid right-wing radicals, many institutional
achievements of the last fifteen years seemed likely to be dismantled in a matter of
months. At the time of writing this text, the collapse of broader social and
institutional context is (again) an imminent threat. In a way, our current situation
echoes the atmosphere of the Yugoslav civil wars of the 1990s. Yet, the Croatian turn to the right is structurally parallel to the recent turn to the right taking place in most
parts of Europe and the world at large. In the aftermath of the global neoliberal
race to the bottom and the War on Terror, the disenfranchised working class vents
its fears over immigration and insists on the return of nationalist values in various
forms suggested by irresponsible political establishments. If they are not spared the
humiliating sense of being outclassed and disenfranchised by the neoliberal race to
the bottom, why should they be sympathetic to those arriving from the
impoverished (semi)-periphery or to victims of turmoil unleashed by the endless
War on Terror? If globalisation is reducing their life prospects to nothing, why
should they not see the solution to their own plight in the return of the regime of
statist nationalism?
At the Multimedia Institute/MaMa we intend to continue our work against this
collapse of context through intersectionalist organizing and activism. We will
continue to do cultural programs, publish books, and organise the Human Rights
Film Festival. In order to articulate, formulate and document years of practical
experience, we aim to strengthen our focus on research and writing about cultural
policy, technological development, and political activism. Memory of the
World/Public Library project will continue to develop alternative infrastructures
for access, and develop new and existing networks of solidarity and public
advocacy for knowledge commons.
LOCAL HISTORIES AND GLOBAL REALITIES

PJ & AK: Your interests and activities are predominantly centred around
information and communication technologies. Yet, a big part of your social
engagement takes place in Eastern Europe, which is not exactly at the forefront of
technological innovation. Can you describe the dynamics of working from the
periphery around issues developed in global centres of power (such as the Silicon
Valley)?
MM & TM: Computers in their present form had been developed primarily in
the Post-World War II United States. Their development started from the military
need to develop the mathematics and physics behind nuclear weapons and counter-air defense, but soon it was combined with efforts to address accounting, logistics
and administration problems in diverse fields such as commercial air traffic,
governmental services, banks and finances. Finally, this interplay of the military
and the economy was joined by enthusiasts, hobbyists, and amateurs, giving the
development of (mainframe, micro and personal) computers its final historical
blueprint. This story is written in canonical computing history books such as The
Computer Boys Take Over: Computers, Programmers, and the Politics of
Technical Expertise. There, Nathan Ensmenger (2010: 14) writes: “the term
computer boys came to refer more generally not simply to actual computer
specialists but rather to the whole host of smart, ambitious, and technologically
inclined experts that emerged in the immediate postwar period.”
Very few canonical computing history books cover other histories. But when
that happens, we learn a lot. Be it Slava Gerovitch’s From Newspeak to Cyberspeak (2002), which recounts the history of Soviet cybernetics, or Eden Medina’s Cybernetic Revolutionaries (2011), which revisits the history of the socialist cybernetic project in Chile during Allende’s government, or the recent book by
Benjamin Peters How Not to Network a Nation (2016), which describes the history
of Soviet development of Internet infrastructure. Many (other) histories are yet to
be heard and written down. And when these histories get written down, diverse
things come into view: geopolitics, class, gender, race, and many more.
With their witty play and experiments with the medium, the early days of the
Internet were highly exciting. Big corporate websites were not much different from
amateur websites and even spoofs. A (different-than-usual) proximity of positions
of power enabled by the Internet allowed many (media-art) interventions, (rebirth
of) manifestos, establishment of (pseudo)-institutions … In these early times of the
Internet’s history and geography, (the Internet subculture of) Eastern Europe
played a very important part. Inspired by Alexei Shulgin, Lev Manovich wrote ‘On
Totalitarian Interactivity’ (1996) where he famously addressed important
differences between understanding of the Internet in the West and the East. For the
West, claims Manovich, interactivity was a perfect vehicle for the ideas of
democracy and equality. For the East, however, interactivity was merely another
form of (media) manipulation. Twenty years later, it seems that Eastern Europe
was well prepared for what the Internet would become today.
PJ & AK: The dominant (historical) narrative of information and
communication technologies is predominantly based in the United States.
However, Silicon Valley is not the only game in town … What are the main
differences between approaches to digital technologies in the US and in Europe?
MM & TM: In the ninties, the lively European scene, which equally included
the East Europe, was the centre of critical reflection on the Internet and its
spontaneous ‘Californian ideology’ (Barbrook & Cameron, 1996). Critical culture
in Europe and its Eastern ‘countries in transition’ had a very specific institutional
landscape. In Western Europe, art, media, culture and ‘post-academic’ research in the humanities were by and large publicly funded. In Eastern Europe, the development of
the civil society had been funded by various international foundations such as the
Open Society Institute aka the Soros Foundation. Critical new media and critical
art scene played an important role in that landscape. A wide range of initiatives,
medialabs, mailing lists, festivals and projects like Next5minutes (Amsterdam/
Rotterdam), Nettime & Syndicate (mailing lists), Backspace & Irational.org
(London), Ljudmila (Ljubljana), Rixc (Riga), C3 (Budapest) and others constituted
a loose network of researchers, theorists, artists, activists and other cultural
workers.
This network was far from exclusively European. It was very well connected to
projects and initiatives from the United States such as Critical Art Ensemble,
Rhizome, and Thing.net, to projects in India such as Sarai, and to struggles of
Zapatistas in Chiapas. A significant feature of this loose network was its mutually
beneficial relationship with relevant European art festivals and institutions such as
Documenta (Kassel), Transmediale/HKW (Berlin) or Ars Electronica (Linz). As a
rule of thumb, critical new media and art could only be considered in a conceptual
setup of hybrid institutions, conferences, forums, festivals, (curated) exhibitions
and performances – and all of that at once! The Multimedia Institute was an active
part of that history, so it is hardly a surprise that the Public Library project took a
similar path of development and contextualization.
However, European hacker communities were rarely hanging out with critical
digital culture crowds. This is not the place to extensively present the historic
trajectory of different hacker communities but, risking a gross simplification, here is a very short genealogy. The earliest European hacker association was the
German Chaos Computer Club (CCC) founded in 1981. Already in the early
1980s, CCC started to publicly reveal (security) weaknesses of corporate and
governmental computer systems. However, their focus on digital rights, privacy,
cyberpunk/cypherpunk, encryption, and security issues prevailed over other forms
of political activism. The CCC were very successful in raising issues, shaping
public discussions, and influencing a wide range of public actors from digital rights
advocacy to political parties (such as Greens and Pirate Party). However, unlike the
Italian and Spanish hackers, CCC did not merge paths with other social and/or
political movements. Italian and Spanish hackers, for instance, were much more
integral to autonomist/anarchist, political and social movements, and they have
kept this tradition until the present day.
PJ & AK: Can you expand this analysis to Eastern Europe, and ex-Yugoslavia
in particular? What were the distinct features of (the development of) hacker
culture in these areas?
MM & TM: Continuing to risk a gross simplification in the genealogy, Eastern
European hacker communities formed rather late – probably because of the
turbulent economic and political changes that Eastern Europe went through after
1989.
At MaMa, we used to run the programme g33koskop (2006–2012) with the goal to
“explore the scope of (term) geek” (Multimedia Institute, 2016b). An important
part of the program was to collect stories from enthusiasts, hobbyists, or ‘geeks’
who used to be involved in do-it-yourself communities during early days of
(personal) computing in Yugoslavia. From these makers of first 8-bit computers,
editors of do-it-yourself magazines and other early day enthusiasts, we could learn
that technical and youth culture was strongly institutionally supported (e.g. with
nation-wide clubs called People’s Technics). However, the socialist regime did not
adequately recognize the importance and the horizon of social changes coming
from (mere) education and (widely distributed) use of personal computers. Instead,
it insisted on the impossible mission of its own industrial computer production in order to preserve autonomy in the global information technology market. What a horrible mistake … To be fair, many other countries during this period felt able to achieve their own, autonomous production of computers – so the mistake reflected
the spirit of the times and the conditions of uneven economic and scientific
development.
Looking back on the early days of computing in the former Yugoslavia, many geeks now see themselves as social visionaries and the avant-garde. During the 1990s across Eastern Europe, unfortunately, they failed to articulate a significant
political agenda other than fighting the monopoly of telecom companies. In their
daily lives, most of these people enjoyed opportunities and privileges of working in
a rapidly growing information technology market. Across the former Yugoslavia,
enthusiasts had started local Linux User Groups: HULK in Croatia, LUGOS in
Slovenia, LUGY in Serbia, Bosnia and Hercegovina, and Macedonia. In the spirit
of their own times, many of these groups focused on attempts to convince businesses that free and open source software (at the time GNU/Linux, Apache,
Exim …) was a viable IT solution.
PJ & AK: Please describe further developments in the struggle between
proponents of proprietary software and the Free Software Movement.
MM & TM: That was the time before Internet giants such as Google, Amazon,
eBay or Facebook built their empires on top of Free/Libre/Open Source Software.
GNU General Public Licence, with its famous slogan “free as in free speech, not
free as in free beer” (Stallman, 2002), was strong enough to challenge the property
regime of the world of software production. Meanwhile, Silicon Valley
experimented with various approaches against the challenge of free software such
as ‘tivoizations’ (systems that incorporate copyleft-based software but impose hardware restrictions on software modification), ‘walled gardens’ (systems where
carriers or service providers control applications, content and media, while
preventing them from interacting with the wider Internet ecosystem), ‘software-as-a-service’ (systems where software is hosted centrally and licensed through
subscription). In order to support these strategies of enclosure and turn them into
profit, Silicon Valley developed investment strategies of venture capital or
leveraged buyouts by private equity to close the proprietary void left after the
success of commons-based peer production projects, where a large number of
people develop software collaboratively over the Internet without the exclusion by
property (Benkler, 2006).
There was a period when it seemed that cultural workers, artists and hackers
would follow the successful model of the Free Software Movement and build a
universal commons-based platform for peer produced, shared and distributed
culture, art, science and knowledge – that was the time of the Creative Commons
movement. But that vision never materialized. It did not help, either, that start-ups
with no business models whatsoever (e.g. del.icio.us (bookmarks), Flickr (photos), YouTube (videos), Google Reader (RSS aggregator), Blogspot, and
others) were happy to give their services for free, let contributors use Creative
Commons licences (mostly on the side of licenses limiting commercial use and
adaptations), let news curators share and aggregate relevant content, and let Time
magazine claim that “You” (meaning “All of us”) are The Person of the Year
(Time Magazine, 2006).
PJ & AK: Please describe the interplay between the Free Software Movement
and the radically capitalist Silicon Valley start-up culture, and place it into the
larger context of political economy of software development. What are its
consequences for the hacker movement?
MM & TM: Before the 2008 economic crash, in the course of only few years,
most of those start-ups and services had been sold out to few business people who
were able to monetize their platforms, users and usees (mostly via advertisement)
or crowd them out (mostly via exponential growth of Facebook and its ‘magic’
network effect). In the end, almost all affected start-ups and services got shut down
(especially those bought by Yahoo). Nevertheless, the ‘golden’ corporate start-up
period brought about huge enthusiasm and the belief that entrepreneurial spirit,
fostered either by an individual genius or by collective (a.k.a. crowd) endeavour,
could save the world. During that period, unsurprisingly, the idea of hacker
labs/spaces exploded.
Fabulous (self-)replicating rapid prototypes, 3D printers, do-it-yourself and the Internet of Things started to resonate with (young) makers all around the world.
Unfortunately, GNU GPL (v.3 at the time) ceased to be a priority. The
infrastructure of free software had become taken for granted, and enthusiastic
dancing on the shoulders of giants became the most popular exercise. Rebranding
existing Unix services (finger > twitter, irc > slack, talk > im), and/or designing the
‘last mile’ of user experience (often as trivial as adding round corners to the
buttons), would often be a good enough reason to enclose the project, do the
slideshow pitch, create a new start-up backed up by an angel investor, and hope to
win in the game of network effect(s).
Typically, the software stack running these projects would be (almost) completely
GNU GPL (server + client), but parts made on OSX (endorsed for being ‘true’
Unix under the hood) would stay enclosed. In this way, projects would shift from
the world of commons to the world of business. In order to pay respect to the open
source community, and to keep up one’s reputation as ‘a good citizen,’ many software components would get their source code published on GitHub – which is a
prime example of that game of enclosure in its own right. Such developments
transformed the hacker movement from a genuine political challenge to the
property regime into a science fiction fantasy that sharing knowledge while
keeping hackers’ meritocracy regime intact could fix all the world’s problems – if only
we, the hackers, are left alone to play, optimize, innovate and make that amazing
technology!
THE SOCIAL LIFE OF DIGITAL TECHNOLOGIES

PJ & AK: This brings about the old debate between technological determinism
and social determinism, which never seems to go out of fashion. What is your take,
as active hackers and social activists, on this debate? What is the role of
(information) technologies in social development?
MM & TM: Any discussion of information technologies and social
development requires the following parenthesis: notions used for discussing
technological development are shaped by the context of parallel US hegemony over the capitalist world-system and its commanding role in the development of information technologies. Today’s critiques of the Internet are far from celebrating its liberatory, democratizing potential. Instead, they often reflect frustration over
its instrumental role in the expansion of social control. Yet, the binary of freedom
and control (Chun, 2008), characteristic for ideological frameworks pertaining to
liberal capitalist democracies, is increasingly at pains to explain what has become
evident with the creeping commercialization and concentration of market power in
digital networks. Information technologies are no different from other general-purpose technologies on which they depend – such as mass manufacture, logistics,
or energy systems.
Information technologies shape capitalism – in return, capitalism shapes
information technologies. Technological innovation is driven by the interests of investors to profit from new commodity markets, and by their capacity to optimize and increase the productivity of other sectors of the economy. The public has some influence over the development of information technologies. In fact, publicly funded
research and development has created and helped commercialize most of the
fundamental building blocks of our present digital infrastructures ranging from
microprocessors and touch-screens all the way to packet-switching networks
(Mazzucato, 2013). However, public influence on commercially matured
information technologies has become limited, driven by the imperatives of accumulation and the regulatory hegemony of the US.
When considering the structural interplay between technological development
and larger social systems, we cannot accept the position of technological
determinism – particularly not in the form of Promethean figures of entrepreneurs,
innovators and engineers who can solve the problems of the world. Technologies
are shaped socially, yet the position of outright social determinism is not acceptable either. The reproduction of social relations depends on contingencies of
technological innovation, just as the transformation of social relations depends on
contingencies of actions by individuals, groups and institutions. Given the
asymmetries that exist between the capitalist core and the capitalist periphery, from
which we hail, strategies for using technologies as agents of social change differ
significantly.
PJ & AK: Based on your activist experience, what is the relationship between
information technologies and democracy?
MM & TM: This relation is typically discussed within the framework of
communicative action (Habermas, 1984 [1981], 1987 [1981]) which describes how
the power to speak to the public has become radically democratized, how digital
communication has coalesced into a global public sphere, and how digital
communication has catalysed the power of collective mobilization. Information
technologies have done all that – but the framework of communicative action
describes only a part of the picture. Firstly, as Jodi Dean warns us in her critique of
communicative capitalism (Dean, 2005; see also Dean, 2009), the self-referential
intensity of communication frequently ends up as a substitute for the hard (and
rarely rewarding) work of political organization. Secondly, and more importantly,
Internet technologies have created ‘winner takes all’ markets and benefited the more highly skilled workforce, thus helping to create extreme forms of economic inequality (Brynjolfsson & McAfee, 2011). Thus, in any list of the world’s richest people, one can find an inordinate number of entrepreneurs from the information technology sector. This feeds deeply into the neoliberal transformation of capitalist societies, with growing (working and unemployed) populations left out of social welfare, which need to be actively appeased or policed. This is the structural problem behind liberal democracies, the electoral successes of the radical right, and
global “Trumpism” (Blyth, 2015). Intrinsic to contemporary capitalism,
information technologies reinforce its contradictions and pave its unfortunate trail
of destruction.
PJ & AK: Access to digital technologies and digital materials is dialectically
intertwined with human learning. For instance, Stallman’s definition of free
software directly addresses this issue in two freedoms: “Freedom 1: The freedom
to study how the program works, and change it to make it do what you wish,” and
“Freedom 3: The freedom to improve the program, and release your improvements
(and modified versions in general) to the public, so that the whole community
benefits” (Stallman, 2002: 43). Please situate the relationship between access and
learning in the contemporary context.
MM & TM: The relationships between digital technologies and education are
marked by the same contradictions and processes of enclosure that have befallen
the free software. Therefore, Eastern European scepticism towards free software is
equally applicable to education. The flip side of interactivity is audience
manipulation; the flip side of access and availability is (economic) domination.
Eroded by rising tuitions, expanding student debt, and poverty-level wages for adjunct faculty, higher education is getting more and more exclusive. However, the occasional spread of enthusiasm through ideas such as MOOCs does not bring
about more emancipation and equality. While they preach loudly about unlimited
access for students at the periphery, neoliberal universities (backed up by venture
capital) are actually hoping to increase their recruitment business (models).
MOOCs predominantly serve members of privileged classes who already have
access to prestige universities, and who are “self-motivated, self-directed, and
independent individuals who would push to succeed anywhere” (Konnikova,
2014). It is a bit worrying that such a rise of inequality results from attempts to
provide materials freely to everyone with Internet access!
The question of access to digital books for public libraries is different. Libraries
cannot afford digital books from the world’s largest publishers (Digitalbookworld, 2012), and the small number of already acquired e-books must be destroyed after only twenty-six lendings (Greenfield, 2012). Thus, the issue of access is effectively left
to competition between Amazon, Google, Apple and other companies. The state of
affairs in scientific publishing is not any better. As we wrote in the collective open
letter ‘In solidarity with Library Genesis and Sci-Hub’ (Custodians.online, 2015),
five for-profit publishers (Elsevier, Springer, Wiley-Blackwell, Taylor & Francis
and Sage) own more than half of all existing databases of academic material, which
are licensed at prices so scandalously high that even Harvard, the richest university
of the Global North, has complained that it cannot afford them any longer. Robert
Darnton, the past director of Harvard Library, says: “We faculty do the research,
write the papers, referee papers by other researchers, serve on editorial boards, all
of it for free … and then we buy back the results of our labor at outrageous prices.”
For all the work supported by public money benefiting scholarly publishers,
particularly the peer review that grounds their legitimacy, prices of journal articles
prohibit access to science to many academics – and all non-academics – across the
world, and render it a token of privilege (Custodians.online, 2015).
PJ & AK: Please describe the existing strategies for struggle against these
developments. What are their main strengths and weaknesses?
MM & TM: Contemporary problems in the field of production, access,
maintenance and distribution of knowledge, regulated by the globally harmonized intellectual property regime, have brought about a tremendous economic, social, political and institutional crisis and deadlock(s). Therefore, we need to revisit and
rethink our politics, strategies and tactics. We could perhaps find inspiration in the
world of free software production, where it seems that common effort, courage and
charming obstinacy are able to build alternative tools and infrastructures. Yet, this
model might be insufficient for the whole scope of the crisis facing knowledge production and dissemination. The aforementioned corporate appropriations of free
software such as ‘tivoizations,’ ‘walled gardens,’ ‘software-as-a-service’ etc. bring
about the problem of longevity of commons-based peer-production.
Furthermore, the sense of entitlement to build alternatives to dominant modes of oppression can only arise in close proximity to capitalist centres of power. The periphery (of capitalism), in contrast, relies on strategies of ‘stealing’
and bypassing socio-economic barriers by refusing to submit to the harmonized
regulation that sets the frame for global economic exchange. If we honestly look
back and try to compare the achievements of digital piracy vs. the achievements of
reformist Creative Commons, it is obvious that the struggle for access to
knowledge is still alive mostly because of piracy.
PJ & AK: This brings us to the struggle against (knowledge as) private
property. What are the main problems in this struggle? How do you go about them?
MM & TM: Many projects addressing the crisis of access to knowledge are
originated in Eastern Europe. Examples include Library Genesis, Science Hub,
Monoskop and Memory of the World. Balázs Bodó’s research (2016) on the ethos
of Library Genesis and Science Hub resonates with our beliefs, shared through all
abovementioned projects, that the concept of private property should not be taken
for granted. Private property can and should be permanently questioned,
challenged and negotiated. This is especially the case in the face of artificial
scarcity (such as lack of access to knowledge caused by intellectual property in the context of digital networks) or selfish speculations over scarce basic human
resources (such as problems related to housing, water or waterfront development)
(Mars, Medak, & Sekulić, 2016).
The struggle to challenge the property regime used to be at the forefront of the
Free Software Movement. In the spectacular chain of recent events, where the
revelations of sweeping control and surveillance of electronic communications
brought about new heroes (Manning, Assange, Snowden), the hacker is again
reduced to the heroic cypherpunk outlaw. This firmly lies within the old Cold War
paradigm of us (the good guys) vs. them (the bad guys). However, only rare and
talented people are able to master cryptography, follow exact security protocols,
practice counter-control, and create a leak of information. Unsurprisingly, these
people are usually white, male, well-educated, native speakers of English.
Therefore, the narrative of us vs. them is not necessarily the most empowering, and
we feel that it requires a complementary strategy that challenges the property
regime as a whole. As our letter at Custodians.online says:
We find ourselves at a decisive moment. This is the time to recognize that the
very existence of our massive knowledge commons is an act of collective
civil disobedience. It is the time to emerge from hiding and put our names
behind this act of resistance. You may feel isolated, but there are many of us.
The anger, desperation and fear of losing our library infrastructures, voiced
across the Internet, tell us that. This is the time for us custodians, being dogs,
humans or cyborgs, with our names, nicknames and pseudonyms, to raise our
voices. Share your writing – digitize a book – upload your files. Don’t let our
knowledge be crushed. Care for the libraries – care for the metadata – care
for the backup. (Custodians.online, 2015)
FROM CIVIL DISOBEDIENCE TO PUBLIC LIBRARY

PJ & AK: Started in 2012, The Public Library project (Memory of the World,
2016a) is an important part of the struggle against the commodification of knowledge. What is the project about; how did it come into being?
MM & TM: The Public Library project develops and affirms scenarios for
massive disobedience against current regulation of production and circulation of
knowledge and culture in the digital realm. Started in 2012, it created a lot of
resonance across the peripheries of an unevenly developed world of study and
learning. Earlier that year, the takedown of the book-sharing site Library.nu produced
the anxiety that the equalizing effects brought about by piracy would be rolled
back. With the takedown, the fact that access to the most recent and most relevant knowledge was (finally) no longer a privilege of rich academic institutions in a few countries of the Global West, and/or the exclusive preserve of academia to boot, simply disappeared into thin air. Certainly, various alternatives from
the deep semi-periphery quickly filled the gap. However, it is almost a miracle that they still continue to exist in spite of the prosecution they face on an everyday basis.
Our starting point for the Public Library project is simple: the public library is the institutional form devised by societies in order to make knowledge and culture accessible to all their members regardless of social or economic status. There is a
political consensus across the board that this principle of access is fundamental to
the purpose of a modern society. Only educated and informed citizens are able to
claim their rights and fully participate in the polity for the common good. Yet, as digital networks have radically expanded the availability of literature and science, the provision of de-commodified access to digital objects has been by and large denied to public libraries. For instance, libraries frequently do not have the right to purchase e-books for lending and preservation. If they do, they are limited with regard to how many times and under what conditions they can lend digital objects
before the license and the object itself is revoked (Greenfield, 2012). The case of
academic journals is even worse. As journals become increasingly digital, libraries
can provide access and ‘preserve’ them only for as long as they pay extortionate
subscriptions. The Public Library project fills in the space that remains denied to
real-world public libraries by building tools for organizing and sharing electronic
libraries, creating digitization workflows and making books available online.
Obviously, we are not alone in this effort. There are many other platforms, public
and hidden, that help people to share books. And the practice of sharing is massive.
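By way of illustration only – this is not the Public Library project’s own software, and every name in it (the ‘library’ folder, the catalogue filename, the port) is a hypothetical assumption – here is a minimal Python sketch of the kind of tool the phrase ‘organizing and sharing electronic libraries’ points to: it indexes a local folder of e-books into a small JSON catalogue and then shares that folder over plain HTTP.

import functools
import json
from http.server import HTTPServer, SimpleHTTPRequestHandler
from pathlib import Path

LIBRARY_DIR = Path("library")               # hypothetical folder holding PDF/EPUB files
CATALOGUE = LIBRARY_DIR / "catalogue.json"  # hypothetical catalogue filename

def build_catalogue() -> None:
    # Walk the folder and write a minimal metadata index (title crudely guessed from the filename).
    entries = []
    for path in sorted(LIBRARY_DIR.iterdir()):
        if path.suffix.lower() in {".pdf", ".epub"}:
            entries.append({
                "title": path.stem.replace("_", " "),
                "file": path.name,
                "size_bytes": path.stat().st_size,
            })
    CATALOGUE.write_text(json.dumps(entries, indent=2))

def serve(port: int = 8000) -> None:
    # Share the folder, catalogue included, over plain HTTP on the local network.
    handler = functools.partial(SimpleHTTPRequestHandler, directory=str(LIBRARY_DIR))
    HTTPServer(("0.0.0.0", port), handler).serve_forever()

if __name__ == "__main__":
    LIBRARY_DIR.mkdir(exist_ok=True)
    build_catalogue()
    serve()

Actual tools in this space additionally handle metadata extraction, deduplication and federation between amateur librarians’ collections; the sketch only shows the basic gesture of cataloguing a shelf and opening it to others.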
PJ & AK: The Public Library project (Memory of the World, 2016a) is a part of
a wider global movement based, amongst other influences, on the seminal work of
Aaron Swartz. This movement consists of various projects including but not
limited to Library Genesis, Aaaaarg.org, UbuWeb, and others. Please situate The
Public Library project in the wider context of this movement. What are its distinct
features? What are its main contributions to the movement at large?
MM & TM: The Public Library project is informed by two historic moments in
the development of institution of public library The first defining moment
happened during the French Revolution – the seizure of library collections from
aristocracy and clergy, and their transfer to the Bibliothèque Nationale and
municipal libraries of the post-revolutionary Republic. The second defining
moment happened in England through working class struggles to make knowledge
accessible to the working class. After the revolution of 1848, that struggle resulted
in tax-supported public libraries. This was an important part of the larger attempt
by the Chartist movement to provide workers with “really useful knowledge”
aimed at raising class consciousness by explaining the functioning of capitalist
domination and exploring ways of building workers’ own autonomous culture
(Johnson, 1988). These defining revolutionary moments have instituted two
principles underpinning the functioning of public libraries: a) general access to knowledge is fundamental to full participation in society, and b) the commodification of knowledge in the form of the book trade needs to be limited by public, de-commodified, non-monetary forms of access through public institutions.
In spite of the enormous expansion of the potential for providing access to knowledge to all, regardless of social status or geographic location, brought about by digital technologies, public libraries have been radically limited in pursuing their mission. This results in the side-lining of public libraries amid the enormous expansion of the
commodification of knowledge in the digital realm, and brings huge profits to
academic publishers. In response to these limitations, a number of projects have
sprung up in order to maintain the public interest by illegal means.
PJ & AK: Can you provide a short genealogy of these projects?
MM & TM: Founded in 1996, Ubu was one of the first online repositories.
Then, in 2001, Textz.com started distributing texts in critical theory. After
Textz.com got shut down in early 2004, it took another year for Aaaaarg to emerge
and Monoskop followed soon thereafter. In the latter part of the 2000s, Gigapedia
started a different trajectory of providing access to comprehensive repositories.
Gigapedia was a game changer, because it provided access to thousands and
thousands of scholarly titles and made access to that large corpus no longer limited
to those working or studying in the rich institutions of the Global North. In 2012
the publishing industry shut down Gigapedia (at the time, it was known as Library.nu). Fortunately, the resulting vacuum did not last for long, as the Library.nu repository got
merged into the holdings of Library Genesis. Building on the legacy of Soviet
scholars who devised the ways of shadow production and distribution of
knowledge in the form of samizdat and early digital distribution of texts in the
post-Soviet period (Balázs, 2014), Library Genesis has built a robust infrastructure
with the mission to provide access to the largest online library in existence while
keeping a low profile. At this moment Library Genesis provides access to books,
and its sister project Science Hub provides access to academic journals. Both
projects are under threat of closure by the largest academic publisher Reed
Elsevier. Together with the Public Library project, they articulate a position of civil
disobedience.
PJ & AK: Please elaborate the position of civil disobedience. How does it
work; when is it justified?
MM & TM: Legitimating discourses usually claim that shadow libraries fall
into the category of non-commercial fair use. These arguments are definitely valid,
yet they do not build a particularly strong ground for defending knowledge
commons. Once they come under attack, therefore, shadow libraries are typically
shut down. In our call for collective disobedience, we want to make a
larger claim. Access to knowledge as a universal condition could not exist if we –
academics and non-academics across the unevenly developed world – did not
create our own ways of commoning the knowledge that we partake in producing and
learning. By introducing the figure of the custodian, we are turning the notion of
property upside down. Paraphrasing the Little Prince, to own something is to be
useful to that which you own (Saint-Exupéry, 1945). Custodians are the political
subjectivity of that disobedient work of care.
Practices of sharing, downloading, and uploading are massive. So, if we want to
prevent our knowledge commons from being taken away over and over again, we
need to publicly and collectively stand behind our disobedient behaviour. We
should not fall into the trap of the debate about legality or illegality of our
practices. Instead, we should acknowledge that our practices, which have been
deemed illegal, are politically legitimate in the face of uneven opportunities
between the Global North and the Global South, in the face of commercialization
of education and student debt in the Global North … This is the meaning of civil
disobedience – to take responsibility for breaking unjust laws.
PJ & AK: We understand your lack of interest in debating legality –
nevertheless, legal services are very interested in your work … For instance,
Marcell has recently been involved in a lawsuit related to Aaaaarg. Please describe
the relationship between morality and legality in your (public) engagement. When,
and under which circumstances, can one’s moral actions justify breaking the law?
MM & TM: Marcell has been recently drawn into a lawsuit that was filed
against Aaaaarg for copyright infringement. Marcell, the founder of Aaaaarg Sean
Dockray, and a number of institutions ranging from universities to continental-scale intergovernmental organizations, are being sued by a small publisher from
Quebec whose translation of André Bazin’s What is Cinema? (1967) was twice
scanned and uploaded to Aaaaarg by an unknown user. The book was removed
each time the plaintiff issued a takedown notice, resulting in minimal damages, but
these people are nonetheless being sued for 500,000 Canadian dollars. Should
Aaaaarg not be able to defend its existence on the principle of fair use, a valuable
common resource will yet again be lost and its founder will pay a high price. In this
lawsuit, ironically, there is little economic interest. But many smaller publishers
find themselves squeezed between the privatization of education which leaves
students and adjuncts with little money for books and the rapid concentration of
academic publishing. For instance, Taylor and Francis has acquired a smaller
humanities publisher Ashgate and shut it down in a matter of months (Save
Ashgate Publishing petition, 2015).
The system of academic publishing is patently broken. It syphons off public
funding of science and education into huge private profits, while denying living
wages and access to knowledge to its producers. This business model is legal, but
deeply illegitimate. Many scientists and even governments agree with this
conclusion – yet, the situation cannot be easily changed because of the entrenched power
passed down from the old models of publishing and their imbrication with
allocation of academic prestige. Therefore, the continuous existence of this model
commands civil disobedience.
PJ & AK: The Public Library project (Memory of the World, 2016a) operates
in various public domains including art galleries. Why did you decide to develop
The Public Library project in the context of arts? How do you conceive the
relationship between arts and activism?
MM & TM: We tend to easily conflate the political with the aesthetic.
Moreover, when an artwork expressly claims a political character, this seems to
grant it recognition and appraisal. Yet, the socially reflective character of an artwork
and its consciously critical position toward social reality might not be outright
political. Political action remains a separate form of agency, different from
that of socially reflexive, situated and critical art. It operates along a different logic
of engagement. It requires collective mobilization and social transformation.
Having said that, socially reflexive, situated and critical art cannot remain detached
from the present conjuncture and cannot exist outside the political space. Within
the world of arts, alternatives to existing social sensibilities and realities can be
articulated and tested without paying a lot of attention to consistency and
plausibility. Activism, by contrast, generally leaves less room for unrestricted
articulation, because it needs to produce real and plausible effects.
With the generous support of the curatorial collective What, How and for Whom
(WHW) (2016), the Public Library project was surprisingly welcomed by the art
world, and this provided us with a stage to build the project, sharpen its arguments
and assert the legitimacy of its political demands. The project was exhibited, with
WHW and other curators, in some of the foremost art venues such as Reina Sofía
in Madrid, Württembergischer Kunstverein in Stuttgart, 98 Weeks in Beirut,
Museum of Contemporary Art Metelkova in Ljubljana, and Calvert 22 in London.
It is great to have a stage where we can articulate social issues and pursue avenues
of action that other social institutions might find risky to support. Yet, while the
space of art provides a safe haven from the adversarial world of political reality, we
think that the addressed issues need to be politicized and that other institutions,
primarily institutions of education, need to stand behind the demand for universal
access. For instance, teaching and research at the University of Zagreb critically
depends on the capacity of its faculty and students to access books and journals
from sources that are deemed illegal – in our opinion, therefore, the University
needs to take a public stand for these forms of access. In the world of
commercialized education and infringement liability, expecting the University to
publicly support us seems highly improbable. However, it is not impossible! This
was recently demonstrated by the Zürich Academy of Arts, which now hosts a
mirror of Ubu – a crucial resource for its students and faculty alike
(Custodians.online, 2016).
PJ & AK: In the current climate of economic austerity, the question of
resources has become increasingly important. For instance, Web 2.0 has narrowed
available spaces for traditional investigative journalism, and platforms such as
Airbnb and Uber have narrowed spaces for traditional labor. Following the same
line of argument, placing activism into art galleries clearly narrows available
spaces for artists. How do you go about this problem? What, if anything, should be
done with the activist takeover of traditional forms of art? Why?
MM & TM: Art can no longer stand outside of the political space, and it can no
longer be safely stowed away into a niche of supposed autonomy within bourgeois
public sphere detached from commodity production and the state. However, art
academies in Croatia and many other places throughout the world still churn out
artists on the premise that art is apolitical. In this view artists can specialize in a
medium and create in the isolation of their studios – if their artwork is recognized as
masterful, it will be bought on the marketplace. This is patently a lie! Art in Croatia
depends on bonds of solidarity and public support.
Frequently it is the art that seeks political forms of engagement rather than vice
versa. A lot of headspace for developing a different social imaginary can be gained
from that venturing aspect of contemporary art. Having said that, art does not need
to be political in order to be relevant and strong.

THE DOUBLE LIFE OF HACKER CULTURE

PJ & AK: The Public Library project (Memory of the World, 2016a) is essentially
pedagogical. When everyone is a librarian, and all books are free, living in the
world transforms into living with the world – so The Public Library project is also
essentially anti-capitalist. This brings us to the intersections between critical
pedagogy of Paulo Freire, Peter McLaren, Henry Giroux, and others – and the
hacker culture of Richard Stallman, Linus Torvalds, Steven Levy, and others. In
spite of various similarities, however, critical pedagogy and hacker culture disagree
on some important points.
With its deep roots in Marxism, critical theory always insists on class analysis.
Yet, imbued with the Californian ideology (Barbrook and Cameron, 1996), the hacker
culture is predominantly individualist. How do you go about the tension between
individualism and collectivism in The Public Library project? How do you balance
these forces in your overall work?
MM & TM: Hacker culture has always lived a double life. Personal computers
and the Internet have set up a perfect projection screen for a mind-set which
understands autonomy as a pursuit for personal self-realisation. Such a mind-set sees
technology as a frontier of limitless and unconditional freedom, and easily melds
with entrepreneurial culture of the Silicon Valley. Therefore, it is hardly a surprise
that individualism has become the hegemonic narrative of hacker culture.
However, not all hacker culture is individualist and libertarian. Since the 1990s,
hacker culture has been heavily divided between radical individualism and radical
mutualism. Fred Turner (2006), Richard Barbrook and Andy Cameron (1996) have
famously shown that radical individualism was built on the freewheeling counterculture of the American hippie movement, while radical mutualism was built on the
collective leftist traditions of anarchism and Marxism. This is evident in the Free
Software Movement, which has placed ethics and politics before economy and
technology. In her superb ethnographic work, Biella Coleman (2013) has shown
that projects such as GNU/Linux distribution Debian have espoused radically
collective subjectivities. In that regard, these projects stand closer to mutualist,
anarchist and communist traditions where collective autonomy is the foundation of
individual freedom.
Our work stands in that lineage. Therefore, we invoke two collective figures –
amateur librarian and custodian. These figures highlight the labor of communizing
knowledge and maintaining infrastructures of access, refuse to leave the commons
to the authority of professions, and create openings where technologies and
infrastructures can be re-claimed for radically collective and redistributive
endeavours. In that context, we are critical of recent attempts to narrow hacker
culture down to issues of surveillance, privacy and cryptography. While these
issues are clearly important, they (again) reframe the hacker community through
the individualist dichotomy of freedom and privacy, and, more broadly, through
the hegemonic discourse of the post-historical age of liberal capitalism. In this
way, the essential building blocks of the hacker culture – relations of production,
relations of property, and issues of redistribution – are being drowned out, and
the collective and massive endeavour of commonizing is being eclipsed by the
capacity of the few crypto-savvy tricksters to avoid government control.
Obviously, we strongly disagree with the individualist, privative and 1337 (elite)
thrust of these developments.
PJ & AK: The Public Library project (Memory of the World, 2016a) arrives
very close to visions of deschooling offered by authors such as Ivan Illich (1971),
Everett Reimer (1971), Paul Goodman (1973), and John Holt (1967). Recent
research indicates that digital technologies offer some fresh opportunities for the
project of deschooling (Hart, 2001; Jandrić, 2014, 2015b), and projects such as
Monoskop (Monoskop, 2016) and The Public Library project (Memory of the
World, 2016a) provide important stepping-stones for emancipation of the
oppressed. Yet, such forms of knowledge and education are hardly – if at all –
recognised by the mainstream. How do you go about this problem? Should these
projects try and align with the mainstream, or act as subversions of the mainstream,
or both? Why?
MM & TM: We are currently developing a more fine-tuned approach to
educational aspects of amateur librarianship. The forms of custodianship over
knowledge commons that underpin the practices behind Monoskop, Public Library,
Aaaaarg, Ubu, Library Genesis, and Science Hub are part and parcel of our
contemporary world – whether you are a non-academic with no access to scholarly
libraries, or student/faculty outside of the few well-endowed academic institutions
in the Global North. As much as commercialization and privatization of education
are becoming mainstream across the world, so are the strategies of reproducing
one’s knowledge and academic research that depend on the de-commodified access
provided by shadow libraries.
Academic research papers are narrower in scope than textbooks, and Monoskop
is thematically more specific than Library Genesis. However, all these practices
exhibit ways in which our epistemologies and pedagogies are built around
institutional structures that reproduce inequality and differentiated access based on
race, gender, class and geography. By building our own knowledge infrastructures, we
build different bodies of knowledge and different forms of relating to our realities –
in the words of Walter Mignolo, we create new forms of epistemic disobedience
(2009). Through Public Library, we have digitized and made available several
collections that represent epistemologically different corpuses of knowledge. A
good example of that is the digital collection of books selected by Black Panther
Herman Wallace as his dream library for political education (Memory of the
World, 2016b).
PJ & AK: Your work breaks traditional distinctions between professionals and
amateurs – when everyone becomes a librarian, the concepts of ‘professional
librarian’ and ‘amateur librarian’ become obsolete. Arguably, this tension is an
inherent feature of the digital world – similar trends can be found in various
occupations such as journalism and arts. What are the main consequences of the
new (power) dynamics between professionals and amateurs?
MM & TM: There are many tensions between amateurs and professionals.
There is the general tension, which you refer to as “the inherent feature of the
digital world,” but there are also more historically specific tensions. We, amateur
librarians, are mostly interested in seizing various opportunities to politicize and
renegotiate the positions of control and empowerment in the tensions that are
already there. We found that storytelling is a particularly useful, efficient and
engaging way of politicization. The naïve and oft-overused claim – particularly
during the Californian nineties – of the revolutionary potential of emerging digital
networks turned out to be a good candidate for replacement by a story dating back
two centuries earlier – the story of the emergence of public libraries in the early days
of the French bourgeois revolution at the end of the 18th century.
The seizure of book collections from the Church and the aristocracy in the
course of revolutions casts an interesting light on the tensions between the
professionals and the amateurs. Namely, the seizure of book collections didn’t lead
to an Enlightenment in the understanding of the world – a change in the paradigm
how we humans learn, write and teach each other about the world. Steam engine,
steam-powered rotary press, railroads, electricity and other revolutionary
technological innovations were not seen as results of scientific inquiry. Instead,
they were by and large understood as developments in disciplines such as
mechanics, engineering and practical crafts, which did not challenge religion as the
foundational knowledge about the world.
Consequently, public prayers continued to act as “hoped for solutions to cattle
plagues in 1865, a cholera epidemic in 1866, and a case of typhoid suffered by the
young Prince (Edward) of Wales in 1871” (Gieryn, 1983). Scientists of the time
had to demarcate science from both religion and mechanics to provide a
rationale for its superiority over the domains of spiritual and technical
discovery. Depending on whom they talked to, asserts Thomas F. Gieryn, scientists
would choose to describe science as either theoretical or empirical, pure or
applied, often in contradictory ways, but with the clear goal of legitimating to
the authorities both the scientific endeavour and its claim to resources. Boundary-work of
demarcation had the following characteristics:
(a) when the goal is expansion of authority or expertise into domains claimed
by other professions or occupations, boundary-work heightens the contrast
between rivals in ways flattering to the ideologists’ side;
(b) when the goal is monopolization of professional authority and resources,
boundary-work excludes rivals from within by defining them as outsiders
with labels such as ‘pseudo,’ ‘deviant,’ or ‘amateur’;
(c) when the goal is protection of autonomy over professional activities,
boundary-work exempts members from responsibility for consequences of
their work by putting the blame on scapegoats from outside. (Gieryn, 1983:
791–792)
Once institutionally established, modern science and its academic system
became the exclusive instances where emerging disciplines now had to seek
recognition and acceptance. The new disciplines (and their respective professions),
in order to become acknowledged by the scientific community as legitimate, had to
repeat the same boundary-work that science in general had once gone through.
The moral of this story is that the best way for a new scientific discipline to
claim its territory was to articulate the specificity and importance of its insights in a
domain no other discipline claimed. It could achieve that by theorizing,
formalizing, and writing its own vocabulary, methods and curricula, and finally by
asking society to see its own benefit in acknowledging the discipline, its
practitioners and its practices as a separate profession – giving it the green light to
create its own departments and eventually join the productive forces of the world.
This is how democratization of knowledge led to the professionalization of science.
Another frequent reference in our storytelling is the history of
professionalization of computing and its consequences for the fields and disciplines
where the work of computer programmers plays an important role (Ensmenger,
2010: 14; Krajewski, 2011). Markus Krajewski in his great book Paper Machines
(2011), looking back on the history of the index card catalog (an analysis that is
formative for our understanding of the significance of the library catalog as an
epistemic tool), introduced the thought-provoking idea of the logical equivalence of
the developed index card catalog and the Turing machine, thus making the library a
vanguard of computing. Granting that equivalence, we nonetheless think that the
professionalization of computing much better explains the challenges of today’s
librarianship and tensions between the amateur and professional librarians.
The world recognized the importance and potential of computer technology
long before computer science won its own autonomy in academia. Computer
science first had to struggle through its own historical phase of boundary-work. In
1965 the Association for Computing Machinery (ACM) decided to
pool together various attempts to define the terms and foundations of computer
science. Still, the field was not given its definition until Donald Knuth
and his colleagues established the algorithm as the principal unit of analysis in
computer science in the first volume of Knuth’s canonical The Art of Computer
Programming (2011) [1968]. Only once the algorithm was posited as the main unit
of study of computer science, which also served as the basis for ACM’s
‘Curriculum ‘68’ (Atchison et al., 1968), was the path properly paved for the future
departments of computer science in the university.
PJ & AK: What are the main consequences of these stories for computer
science education?
MM & TM: Not everyone was happy with the algorithm’s central position in
computer science. Furthermore, since the early days, the computer industry has been
complaining that the university does not provide students with practical
knowledge. Back in 1968, for instance, IBM researcher Hal Sackman said:
new departments of computer science in the universities are too busy
teaching simon-pure courses in their struggle for academic recognition to pay
serious time and attention to the applied work necessary to educate
programmers and systems analysts for the real world. (in Ensmenger, 2010:
133)

The computer world remains a weird hybrid where knowledge is produced in both
academic and non-academic settings, through academic curricula – but also
through fairs, informal gatherings, homebrew computer clubs, hacker communities
and the like. Without the enthusiasm and the experiments with ways in which
knowledge can be transferred and circulated between peers, we would
probably never have arrived at the Personal Computer Revolution at the beginning of the
1980s. Without the number of personal computers already in use, we would
probably never have experienced the Internet revolution at the beginning of the 1990s. It is
through such historical development that computer science became the academic
centre of the larger computer universe which spread its tentacles into almost all
other known disciplines and professions.
PJ & AK: These stories describe the process of professionalization. How do
you go about its mirror image – the process of amateurisation?
MM & TM: Systematization, vocabulary, manuals, tutorials, curricula – all the
processes necessary for achieving academic autonomy and importance in the world
– prime a discipline for automatization of its various skills and workflows into
software tools. That happened to photography (Photoshop, 1990; Instagram, 2010),
architecture (AutoCAD, 1982), journalism (Blogger, 1999; WordPress, 2003),
graphic design (Adobe Illustrator, 1986; Pagemaker, 1987; Photoshop, 1988;
Freehand, 1988), music production (Steinberg Cubase, 1989), and various other
disciplines (Memory of the World, 2016b).
Usually, after such a software tool gets developed and introduced into the
discipline, a period begins during which a number of amateurs start to ‘join’ that
profession. An army of enthusiasts with a specific skill, many self-trained and with
an understanding of a wide range of software tools, join. This phenomenon often
marks a crisis as amateurs coming from different professional backgrounds start to
compete with certified and educated professionals in that field. Still, the future
development of the same software tools remains under the control of software
engineers, who become experts in established workflows, and who promise further
optimizations in the field. This crisis of old professions becomes even more
pronounced if the old business models – and their corporate monopolies – are
challenged by the transition to digital network economy and possibly face the
algorithmic replacement of their workforce and assets.
For professions under these challenging conditions, today it is often too late for
the boundary-work described in our earlier answer. Instead of maintaining authority
and expertise by labelling upcoming enthusiasts as ‘pseudo,’ ‘deviant,’ or
‘amateur,’ contemporary disciplines therefore need to revisit their own roots, values,
vision and benefits for society and then (re-)articulate the corpus of knowledge that
the discipline should maintain for the future.
PJ & AK: How does this relate to the dichotomy between amateur and
professional librarians?
MM & TM: We regard the e-book management software Calibre (2016),
written by Kovid Goyal, as a software tool which has benefited from the
knowledge produced, passed on and accumulated by librarians for centuries.
Calibre has made the task of creating and maintaining the catalog easy.
Our vision is to make sharing, aggregating and accessing catalogs easy and
playful. We like the idea that every rendered catalog is stored on a local hard disk,
that an amateur librarian can choose when to share, and that when she decides to
share, the catalog gets aggregated into a library together with the collections of
other fellow amateur librarians (at https://library.memoryoftheworld.org). For the
purpose of sharing we wrote the Calibre plugin named let’s share books and set up
the related server infrastructure – both of which are easily replicable and
deployable into distributed clones.
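
To make the sharing workflow described above more concrete, here is a minimal sketch in Python of the same pattern: it reads titles and authors from a local Calibre metadata.db and serves them as a JSON catalog over HTTP, which a hypothetical aggregator could then fetch and merge with the catalogs of other amateur librarians. This is not the actual let’s share books plugin; the library path, the port and the aggregation step are illustrative assumptions only.

import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumptions: a standard Calibre library whose metadata.db exposes a
# `books` table with `title` and `author_sort` columns; the path and
# port are placeholders, not values used by the real plugin.
LIBRARY_DB = "/home/librarian/Calibre Library/metadata.db"
PORT = 8080

def read_catalog(db_path):
    # Pull a minimal title/author listing out of Calibre's SQLite database.
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT title, author_sort FROM books ORDER BY title"
    ).fetchall()
    conn.close()
    return [{"title": title, "author": author} for title, author in rows]

class CatalogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the whole catalog as JSON, whatever path is requested.
        body = json.dumps(read_catalog(LIBRARY_DB)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # A hypothetical aggregator could periodically fetch
    # http://<librarian-host>:8080/ and merge the result into a shared
    # library of libraries.
    HTTPServer(("0.0.0.0", PORT), CatalogHandler).serve_forever()

The sketch mirrors the design choice described above: the catalog stays on the librarian’s own disk unless she chooses to run the server, and aggregation is a separate, optional step performed elsewhere.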
Together with Voja Antonić, the legendary inventor of the first eight-bit
computer in Yugoslavia, we also designed and developed a series of book scanners
and used them to digitize hundreds of books focused on Yugoslav humanities, such
as the Digital Archive of Praxis and the Korčula Summer School (2016), the Catalogue
of Liberated Books (2013), Written-off (2015), a collection of books thrown away from
Croatian public libraries during the ideological cleansing of the 1990s, and the collection of
books selected by the Black Panther Herman Wallace as his dream library for
political education (Memory of the World, 2016b).
In our view, amateur librarians are complementary to professional librarians,
and there is so much to learn from and share with each other. Amateur librarians care,
with curiosity, passion and love, about books which are not (yet) digitally curated;
they dare to disobey in pursuit of the emancipatory vision of the world which is
now under threat. If we, amateur librarians, ever succeed in our pursuits – that
should secure the existing jobs of professional librarians and open up many new
and exciting positions. When knowledge is easily accessed, (re)produced and
shared, there will be so much to follow up on.
TOWARDS AN ACTIVIST PUBLIC PEDAGOGY

PJ & AK: You organize talks and workshops, publish books, and maintain a major
regional hub for people interested in digital cultures. In Croatia, your names are
almost synonymous with social studies of the digital – worldwide, you are
recognized as regional leaders in the field. Such engagement has a prominent
pedagogical component – arguably, the majority of your work can be interpreted as
public pedagogy. What are the main theoretical underpinnings of your public
pedagogy? How does it work in practice?
MM & TM: Our organization is a cluster of heterogeneous communities and
fields of interest. Therefore, our approaches to public pedagogy hugely vary. In
principle, we subscribe to the idea that all intelligences are equal and that all
epistemology is socially structured. In practice, this means that our activities are
syncretic and inclusive. They run in parallel without falling under the same
umbrella, and they bring together people of varying levels of skill – who bring in
various types of knowledge, and who arrive from various social backgrounds.
Working with hackers, we favour a hands-on approach. For a number of years
Marcell has organized the weekly Skill Sharing program (Net.culture club MaMa,
2016b), which started from very basic skills. The bar was incrementally raised to
today’s level of the highly specialized meritocratic community of 1337 hackers. As
the required skill level got too demanding, some original members left the group –
yet, the community continues to accommodate geeks and freaks. At the other end,
we maintain a theoretically inflected program of talks, lectures and publications.
Here we invite a mix of upcoming theorists and thinkers and some of the most
prominent intellectuals of today such as Jacques Rancière, Alain Badiou, Saskia
Sassen and Robert McChesney. This program creates a larger intellectual context,
and also provides space for our collaborators in various activities.
Our political activism, however, takes an altogether different approach. More
often than not, our campaigns are based on inclusive planning and direct decision
making processes with broad activist groups and the public. However, such
inclusiveness is usually made possible by a campaigning process that allows
articulation of certain ideas in public and popular mobilization. For instance, before
the Right to the City campaign against privatisation of the pedestrian zone in
Zagreb’s Varšavska Street coalesced together (Pravo na grad, 2016), we tactically
used media for more than a year to clarify underlying issues of urban development
and mobilize broad public support. At its peak, this campaign involved no fewer than
200 activists in the direct decision-making process and thousands of
citizens in the streets. Its prerequisite was hard day-to-day work by a small group
of people organized by an important member of our collective, Teodor Celakoski.
PJ & AK: Your public pedagogy provides great opportunity for personal
development – for instance, talks organized by the Multimedia Institute have been
instrumental in shaping our educational trajectories. Yet, you often tackle complex
problems and theories, which are often described using complex concepts and
language. Consequently, your public pedagogy is inevitably restricted to those who
already possess considerable educational background. How do you balance the
popular and the elitist aspects of your public pedagogy? Do you intend to try and
reach wider audiences? If so, how would you go about that?
MM & TM: Our cultural work equally consists of more demanding and more
popular activities, which mostly work together in synergy. Our popular Human
Rights Film Festival (2016) reaches thousands of people; yet, its highly selective
programme echoes our (more) theoretical concerns. Our political campaigns
aim at scalability, too. Demanding and popular activities do not contradict
each other. However, they do require very different approaches and depend on
different contexts and situations. In our experience, a wide public response to a
social cause cannot be simply produced by shaping messages or promoting causes
in ways that are considered popular. The response of the public primarily depends
on a broadly shared understanding, no matter its complexity, that a certain course
of action has an actual capacity to transform a specific situation. Recognizing that
moment, and acting tactfully upon it, is fundamental to building a broad political
process.
This can be illustrated by the aforementioned Custodians.online letter (2015)
that we recently co-authored with a number of our fellow library activists against
the injunction that allows Elsevier to shut down the two most important repositories
providing access to scholarly writing: Science Hub and Library Genesis. The letter
is a product of our specific collective work and dynamic. Yet, it clearly
articulates various aspects of discontent around this impasse in access to
knowledge, so it resonates with a huge number of people around the world and
gives them a clear indication that there are many who disobey the global
distribution of knowledge imposed by the likes of Elsevier.
PJ & AK: Your work is probably best described by John Holloway’s phrase
“in, against, and beyond the state” (Holloway, 2002, 2016). What are the main
challenges of working under such conditions? How do you go about them?
MM & TM: We could situate the Public Library project within the structure of
tactical agency, where one famously moves into the territory of institutional power
of others. While contesting the regulatory power of intellectual property over
access to knowledge, we thus resort to appropriation of universalist missions of
different social institutions – public libraries, UNESCO, museums. Operating in an
economic system premised on unequal distribution of means, they cannot but fail
to deliver on their universalist promise. Thus, while public libraries have a mission
to provide access to knowledge to all members of the society, they are severely
limited in what they can do to accomplish that mission in the digital realm. By
claiming the mission of universal access to knowledge for shadow libraries,
collectively built shared infrastructures redress the current state of affairs outside of
the territory of institutions. In that sense, these acts of commoning can indeed be
regarded as positioned beyond the state (Holloway, 2002, 2016).
Yet, while shadow libraries can complement public libraries, they cannot
replace public libraries. And this shifts the perspective from ‘beyond’ to ‘in and
against’: we all inhabit social institutions which reflect uneven development in and
between societies. Therefore, we cannot simply operate within binaries: powerful
vs. powerless, institutional vs. tactical. Our space of agency is much more complex
and blurry. Institutions and their employees resist imposed limitations, and
understand that their spaces of agency reach beyond institutional limitations.
Accordingly, the Public Library project enjoys the strong and unequivocal complicity
of art institutions, schools and libraries in its causes and activities. While
collectively building practices that abolish the present state of affairs and reclaim
the dream of universal access to knowledge, we rearticulate the vision of a
radically equal society equipped with institutions that can do justice to that
“infinite demand” (Critchley, 2013). We are collectively pursuing this
dream – in the words of our friend and continuing inspiration Aaron Swartz: “With
enough of us, around the world, we’ll not just send a strong message opposing the
privatization of knowledge – we’ll make it a thing of the past. Will you join us?”
(Swartz, 2008).



tactics in Medak, Mars & WHW 2015


Medak, Mars & WHW
Public Library
2015


Public Library

may • 2015
price 50 kn

This publication is realized along with the exhibition
Public Library • 27/5 –13/06 2015 • Gallery Nova • Zagreb
Izdavači / Publishers
Editors
Tomislav Medak • Marcell Mars •
What, How & for Whom / WHW
ISBN 978-953-55951-3-7 [Što, kako i za koga/WHW]
ISBN 978-953-7372-27-9 [Multimedijalni institut]
A Cip catalog record for this book is available from the
National and University Library in Zagreb under 000907085

With the support of the Creative Europe Programme of the
European Union

ZAGREB • ¶ May • 2015

Public Library

1. Marcell Mars, Manar Zarroug & Tomislav Medak • Public Library (essay)
2. Paul Otlet • Transformations in the Bibliographical Apparatus of the Sciences (Repertory — Classification — Office of Documentation)
3. McKenzie Wark • Metadata Punk
4. Tomislav Medak • The Future After the Library: UbuWeb and Monoskop’s Radical Gestures

Marcell Mars,
Manar Zarroug
& Tomislav Medak

Public library (essay)

In What Was Revolutionary about the French Revolution? 01 Robert Darnton considers how a complete collapse of the social order (when absolutely
everything — all social values — is turned upside
down) would look. Such trauma happens often in
the life of individuals but only rarely on the level
of an entire society.
In 1789 the French had to confront the collapse of
a whole social order—the world that they defined
retrospectively as the Ancien Régime — and to find
some new order in the chaos surrounding them.
They experienced reality as something that could
be destroyed and reconstructed, and they faced
seemingly limitless possibilities, both for good and
evil, for raising a utopia and for falling back into
tyranny.02
The revolution bootstraps itself.
01 Robert H. Darnton, What Was Revolutionary about the
French Revolution? (Waco, TX: Baylor University Press,
1996), 6.
02 Ibid.

In the dictionaries of the time, the word revolution was said to derive from the verb to revolve and
was defined as “the return of the planet or a star to
the same point from which it parted.” 03 French political vocabulary spread no further than the narrow
circle of the feudal elite in Versailles. The citizens,
revolutionaries, had to invent new words, concepts
… an entire new language in order to describe the
revolution that had taken place.
They began with the vocabulary of time and space.
In the French revolutionary calendar used from 1793
until 1805, time started on 1 Vendémiaire, Year 1, a
date which marked the abolition of the old monarchy on (the Gregorian equivalent) 22 September
1792. With a decree in 1795, the metric system was
adopted. As with the adoption of the new calendar,
this was an attempt to organize space in a rational
and natural way. Gram became a unit of mass.
In Paris, 1,400 streets were given new names.
Every reminder of the tyranny of the monarchy
was erased. The revolutionaries even changed their
names and surnames. Le Roy or Leveque, commonly
used until then, were changed to Le Loi or Liberté.
To address someone, out of respect, with vous was
forbidden by a resolution passed on 24 Brumaire,
Year 2. Vous was replaced with tu. People are equal.
The watchwords Liberté, égalité, fraternité (freedom, equality, brotherhood)04 were built through
03 Ibid.
04 Slogan of the French Republic, France.fr, n.d.,
http://www.france.fr/en/institutions-and-values/slogan
-french-republic.html.

literacy, new epistemologies, classifications, declarations, standards, reason, and rationality. What first
comes to mind about the revolution will never again
be the return of a planet or a star to the same point
from which it departed. Revolution bootstrapped,
revolved, and hermeneutically circularized itself.
Melvil Dewey was born in the state of New York in
1851.05 His thirst for knowledge found its satisfaction in libraries. His knowledge about how to
gain knowledge was developed by studying libraries.
Grouping books on library shelves according to the
color of the covers, the size and thickness of the spine,
or by title or author’s name did not satisfy Dewey’s
intention to develop appropriate new epistemologies in the service of the production of knowledge
about knowledge. At the age of twenty-four, he had
already published the first of nineteen editions of
A Classification and Subject Index for Cataloguing
and Arranging the Books and Pamphlets of a Library,06 the classification system that still bears its
author’s name: the Dewey Decimal System. Dewey
had a dream: for his twenty-first birthday he had
announced, “My World Work [will be] Free Schools
and Free Libraries for every soul.”07
05 Richard F. Snow, “Melvil Dewey”, American Heritage 32,
no. 1 (December 1980),
http://www.americanheritage.com/content/melvil-dewey.
06 Melvil Dewey, A Classification and Subject Index for Cataloguing and Arranging the Books and Pamphlets of a
Library (1876), Project Gutenberg e-book 12513 (2004),
http://www.gutenberg.org/files/12513/12513-h/12513-h.htm.
07 Snow, “Melvil Dewey”.


His dream came true. Public Library is an entry
in the catalog of History where a fantastic decimal08
describes a category of phenomenon that—together
with free public education, free public healthcare,
the scientific method, the Universal Declaration of
Human Rights, Wikipedia, and free software, among
others—we, the people, are most proud of.
The public library is a part of these invisible infrastructures that we start to notice only once they
begin to disappear. A utopian dream—about the
place from which every human being will have access to every piece of available knowledge that can
be collected—looked impossible for a long time,
until the egalitarian impetus of social revolutions,
the Enlightenment idea of the universality of knowledge,
and the exceptional suspension of the commercial
barriers to access to knowledge made it possible.
The internet has, as in many other situations, completely changed our expectations and imagination
about what is possible. The dream of a catalogue
of the world — a universal approach to all available
knowledge for every member of society — became
realizable. A question merely of the meeting of
curves on a graph: the point at which the line of
global distribution of personal computers meets
that of the critical mass of people with access to
the internet. Today nobody lacks the imagination
necessary to see public libraries as part of a global infrastructure of universal access to knowledge
for literally every member of society. However, the
08 “Dewey Decimal Classification: 001.”, Dewey.info, 27 October 2014, http://dewey.info/class/001/2009-08/about.en.


emergence and development of the internet is taking place precisely at the point at which an institutional crisis—one with traumatic and inconceivable
consequences—has also begun.
The internet is a new challenge, creating experiences commonly proffered as ‘revolutionary’. Yet, a
true revolution of the internet is the universal access
to all knowledge that it makes possible. However,
unlike the new epistemologies developed during
the French revolution, the tendency is to keep the
‘old regime’ (of intellectual property rights, market
concentration and control of access). The new possibilities for classification, development of languages,
invention of epistemologies which the internet poses,
and which might launch off into new orbits from
existing classification systems, are being suppressed.
In fact, the reactionary forces of the ‘old regime’
are staging a ‘Thermidor’ to suppress the public libraries from pursuing their mission. Today public
libraries cannot acquire, cannot even buy digital
books from the world’s largest publishers.09 The
small number of e-books that they have been able to acquire they must already destroy after only twenty-six
lendings.10 Libraries and the principle of universal
09 “American Library Association Open Letter to Publishers on
E-Book Library Lending”, Digital Book World, 24 September
2012, http://www.digitalbookworld.com/2012/american-library-association-open-letter-to-publishers-on-e-book-library-lending/.
10 Jeremy Greenfield, “What Is Going On with Library E-Book
Lending?”, Forbes, 22 June 2012, http://www.forbes.com/
sites/jeremygreenfield/2012/06/22/what-is-going-on-with-library-e-book-lending/.


access to all existing knowledge that they embody
are losing, in every possible way, the battle with a
market dominated by new players such as
Amazon.com, Google, and Apple.
In 2012, Canada’s Conservative Party–led government cut financial support for Library and
Archives Canada (LAC) by Can$9.6 million, which
resulted in the loss of 400 archivist and librarian
jobs, the shutting down of some of LAC’s internet
pages, and the cancellation of the further purchase
of new books.11 In only three years, from 2010 to
2012, some 10 percent of public libraries were closed
in Great Britain.12
The commodification of knowledge, education,
and schooling (which are the consequences of a
globally harmonized, restrictive legal regime for intellectual property), together with neoliberal austerity politics,
curtails the possibilities of adapting to new sociotechnological conditions, let alone further development, innovation, or even basic maintenance of
public libraries’ infrastructure.
Public libraries are an endangered institution,
doomed to extinction.
Petit bourgeois denial prevents society from confronting this disturbing insight. As in many other
fields, the only way out offered is innovative market-based entrepreneurship.
11 Aideen Doran, “Free Libraries for Every Soul: Dreaming
of the Online Library”, The Bear, March 2014, http://www.
thebear-review.com/#!free-libraries-for-every-soul/c153g.
12 Alison Flood, “UK Lost More than 200 Libraries in 2012”,
The Guardian, 10 December 2012, http://www.theguardian.
com/books/2012/dec/10/uk-lost-200-libraries-2012.


Some have even suggested that the public library should become an
open software platform on top of which creative
developers can build app stores13 or Internet cafés
for the poorest, ensuring that they are only a click
away from the Amazon.com catalog or the Google
search bar. But these proposals overlook, perhaps
deliberately, the fundamental principles of access
upon which the idea of the public library was built.
Those who are well-meaning, intelligent, and
tactful will try to remind the public of all the many
sides of the phenomenon that the public library is:
major community center, service for the vulnerable,
center of literacy, informal and lifelong learning; a
place where hobbyists, enthusiasts, old and young
meet and share knowledge and skills.14 Fascinating. Unfortunately, for purely tactical reasons, this
reminder to the public does not always contain an
explanation of how these varied effects arise out of
the foundational idea of a public library: universal
access to knowledge for each member of the society produces knowledge, produces knowledge about
knowledge, produces knowledge about knowledge
transfer: the public library produces sociability.
The public library does not need the sort of creative crisis management that wants to propose what
13 David Weinberger, “Library as Platform”, Library Journal,
4 September 2012, http://lj.libraryjournal.com/2012/09/
future-of-libraries/by-david-weinberger/.
14 Shannon Mattern, “Library as Infrastructure”, Design
Observer, 9 June 2014, http://places.designobserver.com/
entryprint.html?entry=38488.


the library should be transformed into once our society, obsessed with market logic, has made it impossible for the library to perform its main mission. Such
proposals, if they do not insist on universal access
to knowledge for all members, are Trojan horses for
the silent but galloping disappearance of the public
library from the historical stage. Sociability—produced by public libraries, with all the richness of its
various appearances—will be best preserved if we
manage to fight for the values upon which we have
built the public library: universal access to knowledge for each member of our society.
Freedom, equality, and brotherhood need brave librarians practicing civil disobedience.
Library Genesis, aaaaarg.org, Monoskop, UbuWeb
are all examples of fragile knowledge infrastructures
built and maintained by brave librarians practicing
civil disobedience, on which the world of researchers
in the humanities relies. These projects are re-inventing the public library in the gap left by today’s
institutions in crisis.
Library Genesis15 is an online repository with over
a million books and is the first project in history to
offer everyone on the Internet free download of its
entire book collection (as of this writing, about fifteen terabytes of data), together with all the metadata
(MySQL dump) and PHP/HTML/JavaScript code
for webpages. The most popular earlier reposito15 See http://libgen.org/.


such as Gigapedia (later Library.nu), handled
their upload and maintenance costs by selling advertising space to the pornographic and gambling
industries. Legal action was initiated against them,
and they were closed.16 News of the termination of
Gigapedia/Library.nu strongly resonated among
academic and book-enthusiast circles and was
even noted in the mainstream Internet media, just
like other major world events. The decision by Library Genesis to share its resources has resulted
in a network of identical sites (so-called mirrors)
through the development of an entire range of Net
services of metadata exchange and catalog maintenance, thus ensuring an exceptionally resistant
survival architecture.
aaaaarg.org, started by the artist Sean Dockray, is
an online repository with over 50,000 books and
texts. A community of enthusiastic researchers from
critical theory, contemporary art, philosophy, architecture, and other fields in the humanities maintains,
catalogs, annotates, and initiates discussions around
it. It also serves as a courseware extension to the self-organized education platform The Public School.17
16 Andrew Losowsky, “Library.nu, Book Downloading Site,
Targeted in Injunctions Requested by 17 Publishers,” Huffington Post, 15 February 2012, http://www.huffingtonpost.
com/2012/02/15/librarynu-book-downloading-injunction_
n_1280383.html.
17 “The Public School”, The Public School, n.d.,
https://www.thepublicschool.org/.


UbuWeb18 is the most significant and largest online
archive of avant-garde art; it was initiated and is led
by the conceptual artist Kenneth Goldsmith. UbuWeb,
although still informal, has grown into a relevant
and recognized critical institution of contemporary
art. Artists want to see their work in its catalog and
thus agree to a relationship with UbuWeb that has
no formal contractual obligations.
Monoskop is a wiki for the arts, culture, and media
technology, with a special focus on the avant-garde,
conceptual, and media arts of Eastern and Central
Europe; it was launched by Dušan Barok and others.
In the form of a blog Dušan uploads to Monoskop.org/log
an online catalog of curated titles (at the
moment numbering around 3,000), and, as with
UbuWeb, it is becoming more and more relevant
as an online resource.
Library Genesis, aaaaarg.org, Kenneth Goldsmith,
and Dušan Barok show us that the future of the
public library does not need crisis management,
venture capital, start-up incubators, or outsourcing but simply the freedom to continue extending
the dreams of Melvil Dewey, Paul Otlet19 and other
visionary librarians, just as it did before the emergence of the internet.

18 See http://ubu.com/.
19 “Paul Otlet”, Wikipedia, 27 October 2014,
http://en.wikipedia.org/wiki/Paul_Otlet.


With the emergence of the internet and software
tools such as Calibre and “[let’s share books],”20 librarianship has been given an opportunity, similar to astronomy and the project SETI@home21, to
include thousands of amateur librarians who will,
together with the experts, build a distributed peer-to-peer network to care for the catalog of available
knowledge, because
a public library is:
— free access to books for every member of society
— library catalog
— librarian
With books ready to be shared, meticulously
cataloged, everyone is a librarian.
When everyone is librarian, library is
everywhere.22


20 “Tools”, Memory of the World, n.d.,
https://www.memoryoftheworld.org/tools/.
21 See http://setiathome.berkeley.edu/.
22 “End-to-End Catalog”, Memory of the World, 26 November 2012,
https://www.memoryoftheworld.org/end-to-end-catalog/.


Paul Otlet

Transformations
in the Bibliographical Apparatus
of the Sciences [1]
Repertory — Classification — Office
of Documentation
1. Because of its length, its extension to all countries,
the profound harm that it has created in everyone’s
life, the War has had, and will continue to have, repercussions for scientific productivity. The hour for
the revision of the old order is about to strike. Forced
by the need for economies of men and money, and
by the necessity of greater productivity in order to
hold out against all the competition, we are going to
have to introduce reforms into each of the branches
of the organisation of science: scientific research, the
preservation of its results, and their wide diffusion.
Everything happens simultaneously and the distinctions that we will introduce here are only to
facilitate our thinking. Always adjacent areas, or
even those that are very distant, exert an influence
on each other. This is why we should recognize the
impetus, growing each day even greater in the organisation of science, of the three great trends of
our times: the power of associations, technological
progress and the democratic orientation of institutions. We would like here to draw attention to some
of their consequences for the book in its capacity


as an instrument for recording what has been discovered and as a necessary means for stimulating
new discoveries.
The Book, the Library in which it is preserved,
and the Catalogue which lists it, have seemed for
a long time as if they had achieved their heights of
perfection or at least were so satisfactory that serious
changes need not be contemplated. This may have
been so up to the end of the last century. But for a
score of years great changes have been occurring
before our very eyes. The increasing production of
books and periodicals has revealed the inadequacy of
older methods. The increasing internationalisation
of science has required workers to extend the range
of their bibliographic investigations. As a result, a
movement has occurred in all countries, especially
Germany, the United States and England, for the
expansion and improvement of libraries and for
an increase in their numbers. Publishers have been
searching for new, more flexible, better-illustrated,
and cheaper forms of publication that are better-coordinated with each other. Cataloguing enterprises
on a vast scale have been carried out, such as the
International Catalogue of Scientific Literature and
the Universal Bibliographic Repertory. [2]
Three facts, three ideas, especially merit study
for they represent something really new which in
the future can give us direction in this area. They
are: The Repertory, Classification and the Office of
Documentation.
•••


2. The Repertory, like the book, has gradually been
increasing in size, and improvements in it suggest
the emergence of something new which will radically modify our traditional ideas.
From the point of view of form, a book can be
defined as a group of pages cut to the same format
and gathered together in such a way as to form a
whole. It was not always so. For a long time the
Book was a roll, a volumen. The substances which
then took the place of paper — papyrus and parchment — were written on continuously from beginning to end. Reading required unrolling. This was
certainly not very practical for the consultation of
particular passages or for writing on the verso. The
codex, which was introduced in the first centuries of
the modern era and which is the basis of our present
book, removed these inconveniences. But its faults
are numerous. It constitutes something completed,
finished, not susceptible of addition. The Periodical
with its successive issues has given science a continuous means of concentrating its results. But, in
its turn, the collections that it forms run into the
obstacle of disorder. It is impossible to link similar
or connected items; they are added to one another
pell-mell, and research requires handling great masses of heavy paper. Of course indexes are a help and
have led to progress — subject indexes, sometimes
arranged systematically, sometimes analytically,
and indexes of names of persons and places. These
annual indexes are preceded by monthly abstracts
and are followed by general indexes cumulated every
five, ten or twenty-five years. This is progress, but
the Repertory constitutes much greater progress.


The aim of the Repertory is to detach what the
book amalgamates, to reduce all that is complex to
its elements and to devote a page to each. Pages, here,
are leaves or cards according to the format adopted.
This is the “monographic” principle pushed to its
ultimate conclusion. No more binding or, if it continues to exist, it will become movable, that is to
say, at any moment the cards held fast by a pin or a
connecting rod or any other method of conjunction
can be released. New cards can then be intercalated,
replacing old ones, and a new arrangement made.
The Repertory was born of the Catalogue. In
such a work, the necessity for intercalations was
clear. Nor was there any doubt as to the unitary or
monographic notion: one work, one title; one title,
one card. As a result, registers which listed the same
collections of books for each library but which had
constantly to be re-done as the collections expanded,
have gradually been discarded. This was practical
and justified by experience. But upon reflection one
wonders whether the new techniques might not be
more generally applied.
What is a book, in fact, if not a single continuous line which has initially been cut to the length
of a page and then cut again to the size of a justified
line? Now, this cutting up, this division, is purely
mechanical; it does not correspond to any division
of ideas. The Repertory provides a practical means
of physically dividing the book according to the
intellectual division of ideas.
Thus, the manuscript library catalogue on cards
has been quickly followed by catalogues printed on
cards (American Library Bureau, the Catalogue of the Library of Congress in Washington) [3]; then by
bibliographies printed on cards (International Institute of Bibliography, Concilium Bibliographicum)
[4]; next, indices of species have been published on
cards (Index Speciorum) [5]. We have moved from
the small card to the large card, the leaf, and have
witnessed compendia abandoning the old form for
the new (Jurisclasseur, or legal digests in card form).
Even the idea of the encyclopedia has taken this
form (Nelson’s Perpetual Cyclopedia [6]).
Theoretically and technically, we now have in
the Repertory a new instrument for analytically or
monographically recording data, ideas, information. The system has been improved by divisionary cards of various shapes and colours, placed in
such a way that they express externally the outline
of the classification being used and reduce search
time to a minimum. It has been improved further
by the possibility of using, by cutting and pasting,
materials that have been printed on large leaves or
even books that have been published without any
thought of repertories. Two copies, the first providing the recto, the second the verso, can supply
all that is necessary. One has gone even further still
and, from the example of statistical machines like
those in use at the Census of Washington (sic) [7],
extrapolated the principle of “selection machines”
which perform mechanical searches in enormous
masses of materials, the machines retaining from
the thousands of cards processed by them only those
related to the question asked.
•••

3. But such a development, like the Repertory before it, presupposes a classification. This leads us to
examine the second practical idea that is bringing
about the transformation of the book.
Classification plays an enormous role in scientific thought. If one could say that a science was a
well-made language, one could equally assert that
it is a completed classification. Science is made up
of verified facts which are organised in a structure
of systems, hypotheses, theories, laws. If there is
a certain order in things, it is necessary to have it
also in science which reflects and explains nature.
That is why, since the time of Greek thought until
the present, constant efforts have been made to improve classification. These have taken three principal directions: classification studied as an activity
of the mind; the general classification and sequence
of the sciences; the systematization appropriate to
each discipline. The idea of order, class, genus and
species has been studied since Aristotle, in passing
by Porphyrus, by the scholastic philosophers and by
modern logicians. The classification of knowledge
goes back to the Greeks and owes much to the contributions of Bacon and the Renaissance. It was posed
as a distinct and separate problem by D’Alembert
and the Encyclopédie, and by Ampère, Comte, and
Spencer. The recent work of Manouvrier, Durand
de Cros, Goblot, Naville, de la Grasserie, has focussed on various aspects of it. [8] As to systematics,
one can say that this has become the very basis of
the organisation of knowledge as a body of science.
When one has demonstrated the existence of 28 million stars, a million chemical compounds, 300,000

vegetable species, 200,000 animal species, etc., it is
necessary to have a means, an Ariadne’s thread, of
finding one’s way through the labyrinth formed by
all these objects of study. Because there are sciences of beings as well as sciences of phenomena, and
because they intersect with each other as we better
understand the whole of reality, it is necessary that
this means be used to retrieve both. The state of development of a science is reflected at any given time
by its systematics, just as the general classification
of the sciences reflects the state of development of
the encyclopedia, of the philosophy of knowledge.
The need has been felt, however, for a practical
instrument of classification. The classifications of
which we have just spoken are constantly changing, at least in their detail if not in broad outline. In
practice, such instability, such variability which is
dependent on the moment, on schools of thought
and individuals, is not acceptable. Just as the Repertory had its origin in the catalogue, so practical
classification originated in the Library. Books represent knowledge and it is necessary to arrange them
in collections. Schemes for this have been devised
since the Middle Ages. The elaboration of grand
systems occurred in the 17th and 18th centuries
and some new ones were added in the 19th century. But when bibliography began to emerge as an
autonomous field of study, it soon began to develop
along the lines of the catalogue of an ideal library
comprising the totality of what had been published.
From this to drawing on library classifications was
but a step, and it was taken under certain conditions
which must be stressed.

Up to the present time, 170 different classifications
have been identified. Now, no cooperation is possible if everyone stays shut up in his own system. It
has been necessary, therefore, to choose a universal
classification and to recommend it as such in the
same way that the French Convention recognized
the necessity of a universal system of weights and
measures. In 1895 the first International Conference
of Bibliography chose the Decimal Classification
and adopted a complete plan for its development. In
1904, the edition of the expanded tables appeared. A
new edition was being prepared when the war broke
out. Brussels, headquarters of the International Institute of Bibliography, which was doing this work,
was part of the invaded territory.
In its latest state, the Decimal Classification has
become an instrument of great precision which
can meet many needs. The printed tables contain
33,000 divisions and they have an alphabetical index consisting of about 38,000 words. Learning is
here represented in its entire sweep: the encyclopedia of knowledge. Its principle is very simple. The
empiricism of an alphabetical classification by subject-heading cannot meet the need for organising
and systematizing knowledge. There is scattering;
there is also the difficulty of dealing with the complex expressions which one finds in the modern terminology of disciplines like medicine, technology,
and the social sciences. Above all, it is impossible
to achieve any international cooperation on such
a national basis as language. The Decimal Classification is a vast systematization of knowledge, “the
table of contents of the tables of contents” of all

treatises. But, as it would be impossible to find a
particular subject’s relative place by reference to
another subject, a system of numbering is needed.
This is decimal, which an example will make clear.
Optical Physiology would be classified thus:
5th Class          Natural Sciences
3rd Group          Physics
5th Division       Optics
7th Sub-division   Optical Physiology

or 535.7
This number 535.7 is called decimal because all
knowledge is taken as one of which each science is
a fraction and each individual subject is a decimal
subdivided to a lesser or greater degree. For the sake
of abbreviation, the zero of the complete number,
which would be 0.5357, has been suppressed because
the zero would be repeated in front of each number.
The numbers 5, 3, 5, 7 (which one could call five hundred and thirty-five point seven and which could
be arranged in blocks of three as for the telephone,
or in groups of twos) form a single number when
the implied words, “class, group, division and subdivision,” are uttered.
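
To make the mechanism concrete, here is a minimal Python sketch (the helper names are invented for illustration, not part of Otlet's apparatus) of how the four digits of the example are joined into the notation 535.7 and read back as the decimal fraction 0.5357 of the whole of knowledge:

    # A minimal sketch, not Otlet's own apparatus: the helper names are
    # invented for illustration. It joins the hierarchy digits of the example
    # (5 Natural Sciences, 3 Physics, 5 Optics, 7 Optical Physiology) into the
    # notation 535.7 and reads the result back as a fraction of all knowledge.

    def decimal_number(digits):
        """Join hierarchy digits into dotted notation, e.g. [5, 3, 5, 7] -> '535.7'."""
        s = "".join(str(d) for d in digits)
        # a dot after the first three digits, as in the example 535.7
        return s if len(s) <= 3 else s[:3] + "." + s[3:]

    def as_fraction(number):
        """Read a classification number as a fraction of all knowledge: '535.7' -> 0.5357."""
        return float("0." + number.replace(".", ""))

    assert decimal_number([5, 3, 5, 7]) == "535.7"
    assert as_fraction("535.7") == 0.5357
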
The classification is also called decimal because
all subjects are divided into ten classes, then each
of these into at least ten groups, and each group
into at least ten divisions. All that is needed for the
number 535.7 always to have the same meaning is
to translate the tables into all languages. All that is
needed to deal with future scientific developments

in optical physiology in all of its ramifications is to
subdivide this number by further decimal numbers
corresponding to the subdivisions of the subject.
Finally, all that is needed to ensure that any document or item pertaining to optical physiology finds
its place within the sum total of scientific subjects
is to write this number on it. In the alphabetic index
to the tables references are made from each word
to the classification number just as the index of a
book refers to page numbers.
This first remarkable principle of the decimal
classification is generally understood. Its second,
which has been introduced more recently, is less
well known: the combination of various classification numbers whenever there is some utility in expressing a compound or complex heading. In the
social sciences, statistics is 31 and salaries, 331.2. By
a convention these numbers can be joined by the
simple sign : and one may write 31:331.2 statistics
of salaries.01
This indicates a general relationship, but a subject also has its place in space and time. The subject
may be salaries in France limited to a period such as
the 18th century (that is to say, from 1700 to 1799).
01 The first ten divisions are: 0 Generalities, 1 Philosophy, 2
Religion, 3 Social Sciences, 4 Philology, Language, 5 Pure
Sciences, 6 Applied Science, Medicine, 7 Fine Arts, 8 Literature, 9 History and Geography. The Index number 31 is
derived from: 3rd class social sciences, 1st group statistics. The
Index number 331.2 is derived from 3rd class social sciences,
3rd group political economy, 1st division topics about work,
2nd subdivision salaries.

The sign that characterises division by place being
the parenthesis and that by time quotation marks
or double parentheses, one can write:
31:331.2 (44) «17» statistics — of salaries — in
France — in the 17th century
or ten figures and three signs to indicate, in terms
of the universe of knowledge, four subordinated
headings comprising 42 letters. And all of these
numbers are reversible and can be used for geographic or chronologic classification as well as for
subject classification:
(44) 31:331.2 «17»
France — Statistics — Salaries — 17th Century
«17» (44) 31:331.2
17th Century — France — Statistics — Salaries
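
As an illustration only, here is a small sketch of the compound notation just described (the function and its parameters are hypothetical, and the sign conventions are reduced to the colon, the parenthesis and the double angle marks): a heading is assembled from subject, place and time facets, and the facets can be emitted in any order, which is what makes the numbers reversible.

    from itertools import permutations

    # Hypothetical sketch of the compound notation described above:
    # subjects joined by ':', place in parentheses, time in « » marks.

    def facet_notation(subjects=(), place=None, time=None, order=("subject", "place", "time")):
        parts = {
            "subject": ":".join(subjects) if subjects else None,
            "place": f"({place})" if place else None,
            "time": f"«{time}»" if time else None,
        }
        return " ".join(parts[k] for k in order if parts[k])

    # Statistics (31) of salaries (331.2), in France (44), in the period «17»
    print(facet_notation(subjects=("31", "331.2"), place="44", time="17"))
    # -> 31:331.2 (44) «17»

    # The facets are reversible: any permutation yields a valid entry point
    for order in permutations(("time", "place", "subject")):
        print(facet_notation(("31", "331.2"), "44", "17", order))

The point, as the text goes on to argue, is that the tables are generative: they do not list every compound number in advance but supply the elements from which any number can be formed on demand.
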
The subdivisions of relation and location explained
here, are completed by documentary subdivisions
for the form and the language of the document (for
example, periodical, in Italian), and by functional
subdivisions (for example, in zoology all the divisions by species of animal being subdivided by biological aspects). It follows by virtue of the law of
permutations and combinations that the present
tables of the classification permit the formulation
at will of millions of classification numbers. Just as
arithmetic does not give us all the numbers readymade but rather a means of forming them as we
need them, so the classification gives us the means

of creating classification numbers insofar as we have
compound headings that must be translated into a
notation of numbers.
Like chemistry, mathematics and music, bibliography thus has its own extremely simple notations:
numbers. Immediately and without confusion, it
allows us to find a place for each idea, for each thing
and consequently for each book, article, or document and even for each part of a book or document.
Thus it allows us to take our bearings in the midst
of the sources of knowledge, just as the system of
geographic coordinates allows us to take our bearings on land or sea.
One may well imagine the usefulness of such a
classification to the Repertory. It has rid us of the
difficulty of not having continuous pagination. Cards
to be intercalated can be placed according to their
class number and the numbering is that of tables
drawn up in advance, once and for all, and maintained with an unvarying meaning. As the classification has a very general use, it constitutes a true
documentary classification which can be used in
various kinds of repertories: bibliographic repertories; catalogue-like repertories of objects, persons,
phenomena; and documentary repertories of files
made up of written or printed materials of all kinds.
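
One crude way to picture the repertory's mechanics in code (a sketch under stated assumptions, with invented names; only the idea of intercalation by class number comes from the text): cards carry a classification number, new cards are slotted in at any time, and the order is always that of the tables drawn up in advance.

    import bisect

    # Hypothetical sketch: a card repertory kept in classification order.
    # Each card is stored with its classification number as a tuple of digits,
    # so that 535 files before 535.7 and new cards can be intercalated
    # at any moment without renumbering anything.

    def key(number):
        """'535.7' -> (5, 3, 5, 7), giving the filing order of the tables."""
        return tuple(int(d) for d in number.replace(".", ""))

    class Repertory:
        def __init__(self):
            self.cards = []  # kept sorted by classification key

        def intercalate(self, number, content):
            bisect.insort(self.cards, (key(number), number, content))

        def consult(self, number):
            """All cards filed under a number, including its subdivisions."""
            k = key(number)
            return [c for ck, n, c in self.cards if ck[:len(k)] == k]

    r = Repertory()
    r.intercalate("535.7", "note on optical physiology")
    r.intercalate("31", "statistical abstract")
    r.intercalate("535", "treatise on optics")
    print(r.consult("535"))  # -> ['treatise on optics', 'note on optical physiology']
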
The possibility can be envisaged of encyclopedic
repertories in which are registered and integrated
the diverse data of a scientific field and which draw
for this purpose on materials published in periodicals. Let each article, each report, each item of news
henceforth carry a classification number and, automatically, by clipping, encyclopedias on cards can

be created in which all the results of international
scientific cooperation are brought together at the
same number. This constitutes a profound change
in the technology of the Book, since the repertory
thus formed is simultaneously a constantly up-dated book and a cooperative book in which are found
printed elements produced in all locations.
•••
4. If we can realize the third idea, the Office of Documentation, then reform will be complete. Such an
office is the old library, but adapted to a new function. Hitherto the library has been a museum of
books. Works were preserved in libraries because
they were precious objects. Librarians were keepers.
Such establishments were not organised primarily
for the use of documents. Moreover, their outmoded
regulations if they did not exclude the most modern
forms of publication at least did not admit them.
They have poor collections of journals; collections
of newspapers are nearly nonexistent; photographs,
films, phonograph discs have no place in them, nor
do film negatives, microscopic slides and many other “documents.” The subject catalogue is considered
secondary in the library so long as there is a good
register for administrative purposes. Thus there is
little possibility of developing repertories in the
library, that is to say of taking publications to pieces and redistributing them in a more directly and
quickly accessible form. For want of personnel to
arrange them, there has not even been a place for
the cards that are received already printed.

The Office of Documentation, on the contrary, is
conceived of in such a way as to achieve all that is
lacking in the library. Collections of books are the
necessary basis for it, but books, far from being
considered as finished products, are simply materials which must be developed more fully. This
development consists in establishing the connections each individual book has with all of the other
books and forming from them all what might be
called The Universal Book. It is for this that we use
repertories: bibliographic repertories; repertories of
documentary dossiers gathering pamphlets and extracts together by subject; catalogues; chronological
repertories of facts or alphabetical ones of names;
encyclopedic repertories of scientific data, of laws,
of patents, of physical and technical constants, of
statistics, etc. All of these repertories will be set up
according to the method described above and arranged by the same universal classification. As soon
as an organisation to contain these repertories is
created, the Office of Documentation, one may be
sure that what happened to the book when libraries
first opened — scientific publication was regularised
and intensified — will happen to them. Then there
will be good reason for producing in bibliographies,
catalogues, and above all in books and periodicals
themselves, the rational changes which technology and the creative imagination suggest. What is
still an exception today will be common tomorrow.
New possibilities will exist for cooperative work
and for the more effective organisation of science.
•••

5. Repertory, Classification, Office of Documentation are therefore the three related elements of a
single reform in our methods of registering scientific discoveries and making them available to the
greatest number of people. Already one must speak
less of experiments and uncertain trials than of the
beginning of serious achievement. The International Institute of Bibliography in Brussels constitutes
a vast intellectual cooperative whose members are
becoming more numerous each day. Associations,
scientific establishments, periodical publications,
scientific and technical workers of every kind are
affiliating with it. Its repertories contain millions of
cards. There are sections in several countries02 . But
this was before the War. Since its outbreak, a movement in France, England and the United States has
been emerging everywhere to improve the organisation of the Book. The Office of Documentation has
been suggested as the solution for the requirements
that have been discussed.
It is important that the world of science and
technology should support this movement and
above all that it should endeavour to apply the new
methods to the works which it will be necessary to
re-organise. Among the most important of these is
the International Catalogue of Scientific Literature,
that fine and great work begun at the initiative of the
Royal Society of London. Until now, this work has
been carried on without relation to other works of the same kind: it has not recognised the value of a card repertory or a universal classification. It must recognise them in the future.03 ❧

02 In France, the Bureau Bibliographique de Paris and great associations such as the Société pour l'encouragement de l'industrie nationale, l'Association pour l'avancement des sciences, etc., are affiliated with it.

03 See Paul Otlet, "La Documentation et l'information au service de l'industrie", Bulletin de la Société d'encouragement de l'industrie nationale, June 1917. — La Documentation au service de l'invention. Euréka, October 1917. — L'Institut International de Bibliographie, Bibliographie de la France, 21 December 1917. — La Réorganisation du Catalogue international de la littérature scientifique. Revue générale des sciences, 15 February 1918. The publications of the Institute, especially the expanded tables of the Decimal Classification, have been deposited at the Bureau Bibliographique de Paris, 44 rue de Rennes at the apartments of the Société de l'encouragement. — See also the report presented by General Sebert [9] to the Congrès du Génie civil, in March 1918 and whose conclusions about the creation in Paris of a National Office of Technical Documentation have been adopted.

Editor’s Notes
[1] "Transformations opérées dans l'appareil bibliographique
des sciences,” Revue scientifique 58 (1918): 236-241.
[2] The International Catalogue of Scientific Literature, an enormous work, was compiled by a Central Bureau under the
sponsorship of the Royal Society from material sent in from
Regional Bureaus around the world. It was published annually beginning in 1902 in 17 parts each corresponding to
a major subject division and comprising one or more volumes. Publication was effectively suspended in 1914. By the
time war broke out, the Universal Bibliographic Repertory
contained over 11 million entries.
[3] For card publication by the Library Bureau and Library of
Congress, see Edith Scott, “The Evolution of Bibliographic
Systems in the United States, 1876–1945” and Editor’s Note
36 to the second paper and Note 5 to the seventh paper in
International Organisation and Dissemination of Knowledge; Selected Essays of Paul Otlet, translated and edited by
W. Boyd Rayward. Amsterdam: Elsevier, 1990: 148–156.
[4] Otlet refers to the Concilium Bibliographicum also in Paper
No. 7, “The Reform of National Bibliographies...” in International Organisation and Dissemination of Knowledge; Selected
Essays of Paul Otlet. See also Editor’s Note 5 in that paper
for the major bibliographies published by the Concilium
Bibliographicum.
[5] A possible example of what Otlet is referring to here is the
Gray Herbarium Index. This was “planned to provide cards
for all the names of vascular plant taxa attributable to the

Western Hemisphere beginning with the literature of 1886”
(Gray Herbarium Index, Preface, p. iii). Under its first compiler, 20 instalments consisting in all of 28,000 cards were
issued between 1894 and 1903. It has been continued after
that time and was for many years “issued quarterly at the
rate of about 4,000 cards per year.” At the time the cards
were reproduced in a printed catalogue by G. K. Hall in 1968,
there were 85 subscribers to the card sets.
[6] Nelson's Perpetual Loose-Leaf Encyclopedia was a popular,
12-volume work which went through many editions, its
principle being set down at the beginning of the century.
It was published in binders and the publisher undertook to
supply a certain number of pages of revisions (or renewals)
semi-annually after each edition, the first of which appeared
in 1905. An interesting reference presumably to this work
occurs in a notice, "An Encyclopedia on the Card-Index System," in the Scientific American 109 (1913): 213. The Berlin
Correspondent of the journal reports a proposal made in
Berlin which contains “an idea, in a sense ... already carried
out in an American loose-leaf encyclopedia, the publishers
of which supply new pages to take the place of those that
are obsolete” (Nelsons, an English firm, set up a New York
branch in 1896. Publication in the U.S. of works to be widely
circulated there was a requirement of the copyright law.)
The reporter observes that the principle suggested “affords
a means of recording all facts at present known as well as
those to be discovered in the future, with the same safety
and ease as though they were registered in our memory, by
providing a universal encyclopedia, incessantly keeping
abreast of the state of human knowledge.” The “bookish”
form of conventional encyclopedias acts against its future
success. "In the case of a mere storehouse of facts the infinitely more mobile form of the card index should however
be adopted, possibly,” the author goes on making a most interesting reference, “in conjunction with Dr. Goldschmidt’s
Microphotographic Library System.” The need for a central
institute, the nature of its work, the advantages of the work
so organised are described in language that is reminiscent
of that of Paul Otlet (see also the papers of Goldschmidt
and Otlet translated in International Organisation and
Dissemination of Knowledge; Selected Essays of Paul Otlet).
[7] These machines were derived from Herman Hollerith’s
punched cards and tabulating machines. Hollerith had
introduced them under contract into the U.S. Bureau of
the Census for the 1890 census. This equipment was later
modified and developed by the Bureau. Hollerith, his invention and his business connections lie at the roots of the
present IBM company. The equipment and its uses in the
census from 1890 to 1910 are briefly described in John H.
Blodgett and Claire K. Schultz, “Herman Hollerith: Data
Processing Pioneer,” American Documentation 20 (1969):
221-226. As they observe, suggesting the accuracy of Otlet’s
extrapolation, “his was not simply a calculating machine,
it performed selective sorting, an operation basic to all information retrieval.”
[8] The history of the classification of knowledge has been treated
in English in detail by E.C. Richardson in his Classification
Theoretical and Practical, the first edition of which appeared
in 1901 and was followed by editions in 1912 and 1930. A
different treatment is given in Robert Flint’s Philosophy as
Scientia Scientarium: a History of the Classification of the
Sciences which appeared in 1904. Neither of these works
deal with Manouvrier, a French anthropologist, or Durand de Cros. Joseph-Pierre Durand, sometimes called Durand
de Cros after his birth place, was a French physiologist and
philosopher who died in 1900. In his Traité de documentation,
in the context of his discussion of classification, Otlet refers
to an Essai de taxonomie by Durand published by Alcan. It
seems that this is an error for Aperçus de taxonomie (Alcan,
1899).
[9] General Hippolyte Sebert was President of the Association française pour l’avancement des sciences, and the Société d’encouragement pour l’industrie nationale. He had
been active in the foundation of the Bureau bibliographique
de Paris. For other biographical information about him see
Editor’s Note 9 to Paper no 17, “Henri La Fontaine”, in International Organisation and Dissemination of Knowledge;
Selected Essays of Paul Otlet.

English translation of Paul Otlet's text published with the
permission of W. Boyd Rayward. The translation was originally
published as Paul Otlet, “Transformations in the Bibliographical
Apparatus of the Sciences: Repertory–Classification–Office of
Documentation”, in International Organisation and Dissemination of Knowledge; Selected Essays of Paul Otlet, translated and
edited by W. Boyd Rayward, Amsterdam: Elsevier, 1990: 148–156.

public library

http://aaaaarg.org/


McKenzie Wark

Metadata Punk

So we won the battle but lost the war. By “we”, I
mean those avant-gardes of the late twentieth century whose mission was to free information from the
property form. It was always a project with certain
nuances and inconsistencies, but overall it succeeded beyond almost anybody's wildest dreams. Like
many dreams, it turned into a nightmare in the end,
the one from which we are now trying to awake.
The place to start is with what the situationists
called détournement. The idea was to abolish the
property form in art by taking all of past art and
culture as a commons from which to copy and correct. We see this at work in Guy Debord’s texts and
films. They do not quote from past works, as to do
so acknowledges their value and their ownership.
The elements of détournement are nothing special.
They are raw materials for constructing theories,
narratives, affects of a subjectivity no longer bound
by the property form.
Such a project was recuperated soon enough
back into the art world as “appropriation.” Richard
Prince is the dialectical negation of Guy Debord,

in that appropriation values the original fragment and contributes not to a subjectivity outside of property but rather to a career as an art world
star for the appropriating artist. Of such dreams is
mediocrity made.
If there was a more promising continuation of
détournement it had little to do with the art world.
Détournement became a social movement in all but
name. Crucially, it involved an advance in tools,
from Napster to Bitorrent and beyond. It enabled
the circulation of many kinds of what Hito Steyerl
calls the poor image. Often low in resolution, these
détourned materials circulated thanks both to the
compression of information but also because of the
addition of information. There might be less data
but there’s added metadata, or data about data, enabling its movement.
Needless to say the old culture industries went
into something of a panic about all this. As I wrote
over ten years ago in A Hacker Manifesto, “information wants to be free but is everywhere in chains.”
It is one of the qualities of information that it is indifferent to the medium that carries it and readily
escapes being bound to things and their properties.
Yet it is also one of its qualities that access to it can
be blocked by what Alexander Galloway calls protocol. The late twentieth century was — among other
things — about the contradictory nature of information. It was a struggle between détournement and
protocol. And protocol nearly won.
The culture industries took both legal and technical steps to strap information once more to fixity
in things and thus to property and scarcity. Interestingly, those legal steps were not just a question of
pressuring governments to make free information
a crime. It was also a matter of using international
trade agreements as a place outside the scope of democratic oversight to enforce the old rules of property. Here the culture industries join hands with the
drug cartels and other kinds of information-based
industry to limit the free flow of information.
But laws are there to be broken, and so are protocols of restriction such as encryption. These were
only ever delaying tactics, meant to shore up old
monopoly business for a bit longer. The battle to
free information was the battle that the forces of
détournement largely won. Our defeat lay elsewhere.
While the old culture industries tried to put information back into the property form, there were
other kinds of strategy afoot. The winners were not
the old culture industries but what I call the vulture
industries. Their strategy was not to try to stop the
flow of free information but rather to see it as an
environment to be leveraged in the service of creating a new kind of business. “Let the data roam free!”
says the vulture industry (while quietly guarding
their own patents and trademarks). What they aim
to control is the metadata.
It’s a new kind of exploitation, one based on an
unequal exchange of information. You can have the
little scraps of détournement that you desire, in exchange for performing a whole lot of free labor—and
giving up all of the metadata. So you get your little
bit of data; they get all of it, and more importantly,
any information about that information, such as
the where and when and what of it.

It is an interesting feature of this mode of exploitation that you might not even be getting paid for your
labor in making this information—as Trebor Scholz
has pointed out. You are working for information
only. Hence exploitation can be extended far beyond
the workplace and into everyday life. Only it is not
so much a social factory, as the autonomists call it.
This is more like a social boudoir. The whole of social
space is in some indeterminate state between public
and private. Some of your information is private to
other people. But pretty much all of it is owned by
the vulture industry — and via them ends up in the
hands of the surveillance state.
So this is how we lost the war. Making information free seemed like a good idea at the time. Indeed, one way of seeing what transpired is that we
forced the ruling class to come up with these new
strategies in response to our own self-organizing
activities. Their actions are reactions to our initiatives. In this sense the autonomists are right, only
it was not so much the actions of the working class
to which the ruling class had to respond in this case,
as what I call the hacker class. They had to recuperate a whole social movement, and they did. So our
tactics have to change.
In the past we were acting like data-punks. Not
so much “here’s three chords, now form your band.”
More like: “Here’s three gigs, now go form your autonomous art collective.” The new tactic might be
more a question of being metadata-punks. On the one
hand, it is about freeing information about information rather than the information itself. We need
to move up the order of informational density and

control. On the other hand, it might be an idea to
be a bit discreet about it. Maybe not everyone needs
to know about it. Perhaps it is time to practice what
Zach Blas calls informatic opacity.
Three projects seem to embody much of this
spirit to me. One I am not even going to name or
discuss, as discretion seems advisable in that case.
It takes matters off the internet and out of circulation among strangers. Ask me about it in person if
we meet in person.
The other two are Monoskop Log and UbuWeb.
It is hard to know what to call them. They are websites, archives, databases, collections, repositories,
but they are also a bit more than that. They could be
thought of also as the work of artists or of curators;
of publishers or of writers; of archivists or researchers. They contain lots of files. Monoskop is mostly
books and journals; UbuWeb is mostly video and
audio. The work they contain is mostly by or about
the historic avant-gardes.
Monoskop Log bills itself as “an educational
open access online resource.” It is a component part
of Monoskop, “a wiki for collaborative studies of
art, media and the humanities.” One commenter
thinks they see the “fingerprint of the curator” but
nobody is named as its author, so let’s keep it that
way. It is particularly strong on Eastern European
avant-garde material. UbuWeb is the work of Kenneth Goldsmith, and is “a completely independent
resource dedicated to all strains of the avant-garde,
ethnopoetics, and outsider arts.”
There are two aspects to consider here. One is the
wealth of free material both sites collect. For anybody trying to teach, study or make work in the
avant-garde tradition these are very useful resources.
The other is the ongoing selection, presentation and
explanation of the material going on at these sites
themselves. Both of them model kinds of ‘curatorial’
or ‘publishing’ behavior.
For instance, Monoskop has wiki pages, some
better than Wikipedia, which contextualize the work
of a given artist or movement. UbuWeb offers “top
ten” lists by artists or scholars which give insight
not only into the collection but into the work of the
person making the selection.
Monoskop and UbuWeb are tactics for intervening in three kinds of practices, those of the art world, of publishing and of scholarship. They respond to the current institutional, technical and
political-economic constraints of all three. As it
says in the Communist Manifesto, the forces for social change are those that ask the property question.
While détournement was a sufficient answer to that
question in the era of the culture industries, they try
to formulate, in their modest way, a suitable tactic
for answering the property question in the era of
the vulture industries.
This takes the form of moving from data to metadata, expressed in the form of the move from writing
to publishing, from art-making to curating, from
research to archiving. Another way of thinking this,
suggested by Hiroki Azuma, would be the move from
narrative to database. The object of critical attention
acquires a third dimension, a kind of informational
depth. The objects before us are not just a text or an
image but databases of potential texts and images,
with metadata attached.

The object of any avant-garde is always to practice the relation between aesthetics and everyday
life with a new kind of intensity. UbuWeb and
Monoskop seem to me to be intimations of just
such an avant-garde movement. One that does not
offer a practice but a kind of meta-practice for the
making of the aesthetic within the everyday.
Crucial to this project is the shifting of aesthetic
intention from the level of the individual work to the
database of works. They contain a lot of material, but
not just any old thing. Some of the works available
here are very rare, but not all of them are. It is not
just rarity, or that the works are available for free.
It is more that these are careful, artful, thoughtful
collections of material. There are the raw materials here with which to construct a new civilization.
So we lost the battle, but the war goes on. This
civilization is over, and even its defenders know it.
We live among ruins that accrete in slow motion.
It is not so much a civil war as an incivil war, waged
against the very conditions of existence of life itself.
So even if we have no choice but to use its technologies and cultures, the task is to build another way
of life among the ruins. Here are some useful practices, in and on and of the ruins. ❧

public library

http://midnightnotes.memoryoftheworld.org/


Tomislav Medak

The Future After the Library
UbuWeb and Monoskop’s
Radical Gestures

The institution of the public library has crystallized,
developed and advanced around historical junctures
unleashed by epochal economic, technological and
political changes. A series of crises since the advent
of print have contributed to the configuration of the
institutional entanglement of the public library as
we know it today:01 defined by a publicly available
collection, housed in a public building, indexed and
made accessible with the help of a public catalog, serviced by trained librarians and supported through
public financing. Libraries today embody the idea
of universal access to all knowledge, acting as custodians of a culture of reading, archivists of material
and ephemeral cultural production, go-betweens
of information and knowledge. However, libraries have also embraced a broader spirit of public
service and infrastructure: providing information,
education, skills, assistance and, ultimately, shelter to their communities — particularly their most vulnerable members.

01 For the concept and the full scope of the contemporary library as institutional entanglement see Shannon Mattern, "Library as Infrastructure", Places Journal, accessed April 9, 2015, https://placesjournal.org/article/library-as-infrastructure/.
This institutional entanglement, consisting in
a comprehensive organization of knowledge, universally accessible cultural goods and social infrastructure, historically emerged with the rise of (information) science, social regulation characteristic
of modernity and cultural industries. Established
in its social aspect as the institutional exemption
from the growing commodification and economic
barriers in the social spheres of culture, education
and knowledge, it is a result of struggles for institutionalized forms of equality that still reflect the
best in solidarity and universality that modernity
had to offer. Yet, this achievement is marked by
contradictions that beset modernity at its core. Libraries and archives can be viewed as an organon
through which modernity has reacted to the crises
unleashed by the growing production and fixation
of text, knowledge and information through a history of transformations that we will discuss below.
They have been an epistemic crucible for the totalizing formalizations that have propelled both the
advances and pathologies of modernity.
Positioned at a slight monastic distance and indolence toward the forms of pastoral, sovereign or
economic domination that defined the surrounding world that sustained them, libraries could never
close the rift between the universalist aspirations
of knowledge and their institutional compromise.
Hence, they could never avoid being the battlefield
where their own, and modernity's, ambivalent epistemic and social character was constantly re-examined and ripped asunder. It is this ambivalent
character that has been a potent motor for critical theory, artistic and political subversion — from
Marx’s critique of political economy, psychoanalysis
and historic avant-gardes, to revolutionary politics.
Here we will examine the formation of the library
as an epistemic and social institution of modernity
and the forms of critical engagement that continue
to challenge the totalizing order of knowledge and
appropriation of culture in the present.
Here Comes the Flood02
Prior to the advent of print, the collections held in
monastic scriptoria, royal courts and private libraries
typically contained a limited number of canonical
manuscripts, scrolls and incunabula. In Medieval
and early Renaissance Europe the canonized knowledge considered necessary for the administration of
heavenly and worldly affairs was premised on reading and exegesis of biblical and classical texts. It is
estimated that by the 15th century in Western Europe there were no more than 5 million manuscripts held mainly in the scriptoria of some 21,000 monasteries and a small number of universities. While the number of volumes had grown sharply from less than 0.8 million in the 12th century, the number of monasteries had remained constant throughout that period. The number of manuscripts read averaged around 1,000 per million inhabitants, with the total population of Europe peaking around 60 million.03 All in all, the book collections were small, access was limited and reading culture played a marginal role.

02 The metaphor of the information flood, here incanted in the words of Peter Gabriel's song with apocalyptic overtones, as well as a good part of the historic background of the development of the index card catalog in the following paragraphs are based on Markus Krajewski, Paper Machines: About Cards & Catalogs, 1548–1929 (MIT Press, 2011). The organizing idea of Krajewski's historical account, that the index card catalog can be understood as a Turing machine avant la lettre, served as a starting point for the understanding of the library as an epistemic institution developed here.
The proliferation of written matter after the invention of mechanical movable type printing would
greatly increase the number of books, but also transform the
patterns of literacy and knowledge production. Already in the first fifty years after Gutenberg’s invention, 12 million volumes were printed, and from
this point onwards the output of printing presses
grew exponentially to 700 million volumes in the
18th century. In the aftermath of the explosion in
book production the cost of producing and buying
books fell drastically, reducing the economic barriers to literacy, but also creating a material vector
for a veritable shift of the epistemic paradigm. The
emerging reading public was gaining access to the new works of a nascent Enlightenment movement, ushering in the modern age of science.

03 For an economic history of the book in Western Europe see Eltjo Buringh and Jan Luiten Van Zanden, "Charting the 'Rise of the West': Manuscripts and Printed Books in Europe, A Long-Term Perspective from the Sixth through Eighteenth Centuries", The Journal of Economic History 69, No. 02 (June 2009): 409–45, doi:10.1017/S0022050709000837, particularly Tables 1-5.

In parallel
with those larger epochal transformations, the explosion of print also created a rising tide of new books
that suddenly inundated the libraries. The libraries
now had to contend both with the orders-of-magnitude greater volume of printed matter and the
growing complexity of systematically storing, ordering, classifying and tracking all of the volumes
in their collection. A once almost static collection
of canonical knowledge became an ever expanding
dynamic flux. This flood of new books, the first of
three to follow, presented principled, infrastructural and organizational challenges to the library that
radically transformed and coalesced its functions.
The epistemic shift created by this explosion of
library holdings led to a revision of the assumption
that the library is organized around a single holy
scripture and a small number of classical sources.
Coextensive with the emergence and multiplication of new sciences, the books that were entering
the library now covered an ever diversified scope
of topics and disciplines. And the sheer number of
new acquisitions demanded the physical expansion of libraries, which in turn required a radical
rethinking of the way the books were stored, displayed and indexed. In fact, the flood caused by the
printing press was nothing short of a revolution in
the organization, formalization and processing of
information and knowledge. This becomes evident
in the changes that unfolded between the 16th and
the early 20th century in the cataloging of library collections.

The initial listings of books were kept in bound
volumes, books in their own right. But as the number of items arriving into the library grew, the constant need to insert new entries made the bound
book format increasingly impractical for library
catalogs. To make things more complicated still,
the diversification of the printed matter demanded
a richer bibliographic description that would allow
better comprehension of what was contained in the
volumes. Alongside the name of the author and the
book’s title, the description now needed to include
the format of the volume, the classification of the
subject matter and the book’s location in the library.
As the pace of new arrivals accelerated, the effort to
create a library catalog became unending, causing a
true crisis in the emerging librarian profession. This
would result in a number of physical and epistemic
innovations in the organization and formalization
of information and knowledge. The requirement
to constantly rearrange the order of entries in the
listing led to the eventual unbinding of the bound
catalog into separate slips of paper and finally to the
development of the index card catalog. The unbound
index cards and their floating rearrangement, not
unlike that of the movable type, would in turn result in the design of filing cabinets. From Conrad
Gessner’s Bibliotheca Universalis, a three-volume
book-format catalog of around 3,000 authors and
10,000 texts, arranged alphabetically and topically,
published in the period 1545–1548; Gottfried Wilhelm Leibniz’s proposals for a universal library
during his tenure at the Wolfenbüttel library in the
late 17th century; to Gottfried van Swieten’s catalog

of the Viennese court library, the index card catalog and the filing cabinets would develop almost to
their present form.04
The unceasing inflow of new books into the library
prompted the need to spatially organize and classify
the arrangement of the collection. The simple addition of new books to the shelves by size; canonical
relevance or alphabetical order, made little sense
in a situation where the corpus of printed matter
was quickly expanding and no individual librarian
could retain an intimate overview of the library’s
entire collection. The inflow of books required that
the brimming shelf-space be planned ahead, while
the increasing number of expanding disciplines required that the collection be subdivided into distinct
sections by fields. First the shelves became classified
and then the books individually received a unique
identifier. With the completion of the Josephinian
catalog in the Viennese court library, every book became compartmentalized according to a systematic
plan of sciences and assigned a unique sequence of
a Roman numeral, a Roman letter and an Arabic
numeral by which it could be tracked down regardless of its physical location.05 The physical location
of the shelves in the library no longer needed to be
reflected in the ordering of the catalog, and the catalog became a symbolic representation of the freely
re-arrangeable library. In the technological lingo of
today, the library required storage, index, search
and address in order to remain navigable. It is this
formalization of a universal system of classification of objects in the library with the relative location of objects and re-arrangeable index that would then in 1876 receive its present standardized form in Melvil Dewey's Decimal System.

04 Krajewski, Paper Machines, op. cit., chapter 2.
05 Ibid., 30.
The development of the library as an institution of
public access and popular literacy did not proceed
apace with the development of its epistemic aspects.
It was only a series of social upheavals and transformations in the course of the 18th and 19th century
that would bring about another flood of books and
political demands, pushing the library to become
embedded in an egalitarian and democratic political culture. The first big step in that direction came
with the decision of the French revolutionary National Assembly from 2 November 1789 to seize all
book collections from the Church and aristocracy.
Millions of volumes were transferred to the Bibliothèque Nationale and local libraries across France.
In parallel, particularly in England, capitalism was
on the rise. It massively displaced the impoverished rural population into growing urban centers,
propelled the development of industrial production and, by the mid-19th century, introduced the
steam-powered rotary press into the book business.
As books became more easily and mass produced,
the commercial subscription libraries catering to the
better-off parts of society blossomed. This brought
the class aspect of the nascent demand for public
access to books to the fore. After the failed attempts
to introduce universal suffrage and end the system
of political representation based on property entitlements in the 1830s and 1840s, the English Chartist

movement started to open reading rooms and cooperative lending libraries that would quickly become
a popular hotbed of social exchanges between the
lower classes. In the aftermath of the revolutionary
upheavals of 1848, the fearful ruling classes heeded
the demand for tax-financed public libraries, hoping
that the access to literature and edification would
ultimately hegemonize the working class for the
benefits of capitalism’s culture of self-interest and
competition.06
The Avant-gardes in the Library
As we have just demonstrated, the public library
in its epistemic and social aspects coalesced in the
context of the broader social transformations of
modernity: early capitalism and processes of nation-building in Europe and the USA. These transformations were propelled by the advancement of
political and economic rationalization, public and
business administration, statistical and archival
procedures. Archives underwent a corresponding and largely concomitant development with the
libraries, responding with a similar apparatus of
classification and ordering to the exponential expansion of administrative records documenting the
social world and to the historicist impulse to capture the material traces of past events. Overlaying the spatial organization of documentation, rules of its classification and symbolic representation of the archive in reference tools, they tried to provide a formalization adequate to the passion for capturing historical or present events. Characteristic of the ascendant positivism of the 19th century, the archivists' and librarians' epistemologies harbored a totalizing tendency that would become subject to subversion and displacement in the first decades of the 20th century.

06 For the social history of the public library see Matthew Battles, Library: An Unquiet History (Random House, 2014), chapter 5: "Books for all".
The assumption that the classificatory form can
fully capture the archival content would become
destabilized over and over by the early avant-gardist
permutations of formal languages of classification:
dadaist montage of the contingent compositional
elements, surrealist insistence on the unconscious
surpluses produced by automatized formalized language, constructivist foregrounding of dynamic and
spatialized elements in the acts of perception and
cognition of an artwork.07 The material composition
of the classified and ordered objects already contained formalizations deposited into those objects
by the social context of their provenance or projected onto them by the social situation of encounter
with them. Form could become content and content
could become form. The appropriations, remediations and displacements exacted by the neo-avant-gardes in the second half of the 20th century produced subversions, resignifications and simulacra that only further blurred the lines between histories and their construction, dominant classifications and their immanent instabilities.

07 Sven Spieker, The Big Archive: Art from Bureaucracy (MIT Press, 2008) provides a detailed account of strategies that the historic avant-gardes and the post-war art have developed toward the classificatory and ordering regime of the archive.
Where does the library fit into this trajectory? Operating around an uncertain and politically embattled universal principle of public access to knowledge
and organization of information, libraries continued being sites of epistemic and social antagonisms,
adaptations and resilience in response to the challenges created by the waves of radical expansion of
textuality and conflicting social interests between
the popular reading culture and the commodification of cultural consumption. This precarious position is presently being made evident by the third
big flood — after those unleashed by movable type
printing and the social context of industrial book
production — that is unfolding with the transition
of the book into the digital realm. Both the historic
mode of the institutional regulation of access and
the historic form of epistemic classification are
swept up in this transformation. While the internet
has made possible a radically expanded access to
digitized culture and knowledge, the vested interests of cultural industries reliant on copyright for
their control over cultural production have deepened the separation between cultural producers and
their readers, listeners and viewers. While the hypertextual capacity for cross-reference has blurred
the boundaries of the book, digital rights management technologies have transformed e-books into
closed silos. Both the decommodification of access
and the overcoming of the reified construct of the

self-enclosed work in the form of a book come at
the cost of illegality.
Even the avant-gardes in all their inappropriable
and idiosyncratic recalcitrance fall no less under
the legally delimited space of copyrightable works.
As they shift format, new claims of ownership and
appropriation are built. Copyright is a normative
classification that is totalizing, regardless of the
effects of leaky networks speaking to the contrary.
Few efforts have insisted on the subverting of juridical classification by copyright more lastingly than
the UbuWeb archive. Espousing the avant-gardes’
ethos of appropriation, for almost 20 years it has
collected and made accessible the archives of the
unknown, outsider, rare and canonized avant-gardes and contemporary art that would otherwise have remained reserved for the vaults and restricted access
channels of esoteric markets, selective museological
presentations and institutional archives. Knowing
that asking to publish would amount to aligning itself with the totalizing logic of copyright, UbuWeb
has shunned the permission culture. At the level of
poetical operation, as a gesture of displacing the cultural archive from a regime of limited, into a regime
of unlimited access, it has created provocations and
challenges directed at the classifying and ordering
arrangements of property over cultural production.
One can only assume that as such it has become a
mechanism for small acts of treason for the artists,
who, short of turning their back fully on the institutional arrangements of the art world they inhabit,
use UbuWeb to release their own works into unlimited circulation on the net. Sometimes there might

be no way or need to produce a work outside the
restrictions imposed by those institutions, just as
it is sometimes impossible for academics to avoid
the contradictory world of academic publishing,
yet that is still no reason to keep one’s allegiance to
their arrangements.
At the same time UbuWeb has played the game
of avant-gardist subversion: “If it doesn’t exist on
the internet, it doesn’t exist”. Provocation is most
effective when it is ignorant of the complexities of
the contexts that it is directed at. Its effect starts
where fissures in the defense of the opposition start
to show. By treating UbuWeb as massive evidence
for the internet as a process of reappropriation, a
process of “giving to all”, its volunteering spiritus
movens, Kenneth Goldsmith, has been constantly rubbing copyright apologists up the wrong way.
Rather than producing qualifications, evasions and
ambivalences, straightforward affirmation of copying,
plagiarism and reproduction as a dominant
yet suppressed mode of operation of digital culture re-enacts the avant-gardes’ gesture of taking
no hostages from the officially sanctioned systems
of classification. By letting the incumbents of control over cultural production react to the norm of
copying, you let them struggle to dispute the norm
rather than having to defend it yourself.
UbuWeb was an early-comer. Starting in 1996
and still functioning today on seemingly similar
technology, it is a child of the early days of the World
Wide Web and the promissory period of the experimental internet. It is resolutely Web 1.0, with
a single maintainer, idiosyncratically simple in its
layout, and programmatically committed to
eventual obsolescence and sudden abandonment.
No platform, no generic design, no widgets, no
kludges and no community features. Only Beckett
avec links. Endgame.
A Book is an Index is an Index is an Index...
Since the first book flood, the librarian dream of
epistemological formalization has revolved around
the aspiration to cross-reference all the objects in
the collection. Within the physical library, topical designation has been relegated to the confines of
the index card catalog, which remained isolated from the
structure of citations and indexes in the books themselves. With the digital transition of the book, the
time-shifted hypertextuality of citations and indexes
became realizable as the immediate cross-referentiality of segments of an individual text to segments
of other texts and other digital artifacts across the now
permeable boundaries of the book.
Developed as a wiki for collaborative studies of
art, media and the humanities, Monoskop.org took
up the task of mapping and describing avant-gardes and media art in Europe. In its approach both
indexical and encyclopedic, it is an extension of
the collaborative editing made possible by wiki
technology. Wikis rose to prominence in the early
2000s allowing everyone to edit and extend websites running on that technology by mastering a
very simple markup language. Wikis have been the
harbinger of a democratization of web publishing
that would eventually produce the largest collaborative
website on the internet — Wikipedia, as
well as a number of other collaborative platforms.
Monoskop.org embraces the encyclopedic spirit of
Wikipedia, focusing on its own specific topical and
topological interests. However, from its earliest days
Monoskop.org has also developed as a form of index
that maps out places, people, artworks, movements,
events and venues that compose the dense network
of European avant-gardes and media art.
If we take the index as a formalization of cross-referential relations between names of people, titles
of works and concepts that exist in the books and
across the books, what emerges is a model of a relational database reflecting the rich mesh of cultural
networks. Each book can serve as an index linking
its text to people, other books, and segments within them.
To provide a paradigmatic demonstration of that
idea, Monoskop.org has assembled an index of all
persons in Friedrich Kittler’s Discourse Networks,
with each index entry linking both to its location
in the digital version of the book displayed on the
aaaaarg.org archive and to relevant resources for
those persons on Monoskop.org and the wider internet. Hence, each object in the library, an index
in its own right, potentially allows one to initiate
the relational re-classification and re-organization
of all other works in the library through linkable
information.
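
To make the data model implicit in this idea concrete, here is a minimal illustrative sketch in Python. It is not Monoskop.org's actual implementation; the page numbers and URLs in it are hypothetical placeholders. It only shows how a per-book index of persons, each entry pointing to locations in a digital copy and to related resources, can be merged into the kind of cross-referential map described above, so that any one book becomes an entry point for re-ordering the rest of the collection.

```python
# Illustrative sketch only; not Monoskop.org's actual data model.
from collections import defaultdict

# Hypothetical per-book index: person -> locations in a digital copy + resources.
discourse_networks_index = {
    "Friedrich Nietzsche": {
        "locations": ["p. 177", "p. 183"],                          # placeholder pages
        "resources": ["https://example.org/Friedrich_Nietzsche"],   # placeholder URL
    },
    "Rainer Maria Rilke": {
        "locations": ["p. 315"],
        "resources": ["https://example.org/Rainer_Maria_Rilke"],
    },
}

def merge_indexes(per_book_indexes):
    """Invert several per-book indexes into one cross-referential map:
    person -> all (book, location) occurrences plus collected resources."""
    merged = defaultdict(lambda: {"occurrences": [], "resources": set()})
    for book, index in per_book_indexes.items():
        for person, entry in index.items():
            for location in entry["locations"]:
                merged[person]["occurrences"].append((book, location))
            merged[person]["resources"].update(entry["resources"])
    return merged

library = {"Discourse Networks": discourse_networks_index}
print(merge_indexes(library)["Friedrich Nietzsche"])
```
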
Fundamental to the works of the post-socialist
retro-avant-gardes of the last couple of decades has
been the re-writing of a history of art in reverse.
In the works of IRWIN, Laibach or Mladen Stilinović, or the comparable work of Komar & Melamid,
totalizing modernity is detourned by re-appropriating the forms of visual representation and classification that the institutions of modernity used to
construct a linear historical narrative of evolutions
and breaks in the 19th and 20th century. Genealogical
tables, events, artifacts and discourses of the past
were re-enacted, over-affirmed and displaced to
open up the historic past relegated to the archives
to an understanding that transformed the present
into something radically uncertain. The efforts of
Monoskop.org in digitizing the artifacts of the
20th century avant-gardes and playing with the
epistemic tools of early book culture are a parallel
gesture, with a technological twist. If big data and
the control over information flows today increasingly naturalize and re-affirm the 19th century
positivist assumptions of the steerability of society,
then the endlessly recombinant relations and affiliations between cultural objects threaten to overflow
that recurrent epistemic framework of modernity’s
barbarism in its cybernetic form.
The institution of the public library finds itself
today under a double attack. One unleashed by
the dismantling of the institutionalized forms of
social redistribution and solidarity. The other by
the commodifying forces of expanding copyright
protections and digital rights management, control
over the data flows and command over the classification and order of information. In a world of
collapsing planetary boundaries and unequal development, those who control the epistemic order
control the future.08 The Googles and the NSAs run
on capturing totality — the world’s knowledge and
communication made decipherable, organizable and
controllable. The instabilities of the epistemic order
that the library continues to instigate at its margins
contribute to keeping the future open beyond the
script of ‘commodify and control’. In their acts of
re-appropriation UbuWeb and Monoskop.org are
but a reminder of the resilience of libraries’ instability that signals toward a future that can be made
radically open. ❧

08 In his article “Controlling the Future—Edward Snowden and
the New Era on Earth”, (accessed April 13, 2015, http://www.
eurozine.com/articles/2014-12-19-altvater-en.html), Elmar
Altvater makes a comparable argument that the efforts of
the “Five Eyes” to monitor the global communication flows,
revealed by Edward Snowden, and the control of the future
social development defined by the urgency of mitigating the
effects of the planetary ecological crisis cannot be thought
apart.

public library

http://kok.memoryoftheworld.org


Public Library
www.memoryoftheworld.org

Publishers
What, How & for Whom / WHW
Slovenska 5/1 • HR-10000 Zagreb
+385 (0) 1 3907261
whw@whw.hr • www.whw.hr
ISBN 978-953-55951-3-7 [Što, kako i za koga/WHW]
Multimedia Institute
Preradovićeva 18 • HR-10000 Zagreb
+385 (0)1 4856400
mi2@mi2.hr • www.mi2.hr
ISBN 978-953-7372-27-9 [Multimedijalni institut]
Editors
Tomislav Medak • Marcell Mars • What, How & for Whom / WHW
Copy Editor
Dušanka Profeta [Croatian]
Anthony Iles [English]
Translations
Una Bauer
Tomislav Medak
Dušanka Profeta
W. Boyd Rayward
Design & layout
Dejan Kršić @ WHW
Typography
MinionPro [robert slimbach • adobe]

English translation of Paul
Otlet’s text published with the permission of W. Boyd
Rayward. The translation was originally published as
Paul Otlet, “Transformations in the Bibliographical
Apparatus of the Sciences: Repertory–Classification–Office
of Documentation”, in International Organisation and
Dissemination of Knowledge; Selected Essays of Paul Otlet,
translated and edited by W. Boyd Rayward, Amsterdam:
Elsevier, 1990: 148–156. ❧
format / size
120 × 200 mm
pages
144
Paper
Agrippina 120 g • Rives Laid 300 g
Printed by
Tiskara Zelina d.d.
Print Run
1000
Price
50 kn
May • 2015

This publication, realized along with the exhibition
Public Library in Gallery Nova, Zagreb 2015, is a part of
the collaborative project This Is Tomorrow. Back to Basics:
Forms and Actions in the Future organized by What, How
& for Whom / WHW, Zagreb, Tensta Konsthall, Stockholm
and Latvian Center for Contemporary Art / LCCA, Riga, as a
part of the book edition Art As Life As Work As Art. ❧

Supported by
Office of Culture, Education and Sport of the City of Zagreb
Ministry of Culture of the Republic of Croatia
Croatian Government Office for Cooperation with NGOs
Creative Europe Programme of the European Commission.
National Foundation for Civil Society Development
Kultura Nova Foundation

This project has been funded with support
from the European Commission. This publication reflects
the views only of the authors, and the Commission
cannot be held responsible for any use which may be
made of the information contained therein. ❧
Publishing of this book is enabled by financial support of
the National Foundation for Civil Society Development.
The content of the publication is the responsibility of
its authors and as such does not necessarily reflect
the views of the National Foundation. ❧
This project is financed
by the Croatian Government Office for Cooperation
with NGOs. The views expressed in this publication
are the sole responsibility of the publishers. ❧

This book is licensed under a Creative
Commons Attribution–ShareAlike 4.0
International License. ❧

Public Library

may • 2015
price 50 kn



tactics in Stalder 2018


Stalder
The Digital Condition
2018


---
lang: en
title: The Digital Condition
---

::: {.figure}
[]{#coverstart}

![Cover page](images/cover.jpg)
:::

Table of Contents

1. [Preface to the English Edition](#fpref)
2. [Acknowledgments](#ack)
3. [Introduction: After the End of the Gutenberg Galaxy](#cintro)
1. [Notes](#f6-ntgp-9999)
4. [I: Evolution](#c1)
1. [The Expansion of the Social Basis of Culture](#c1-sec-0002)
2. [The Culturalization of the World](#c1-sec-0006)
3. [The Technologization of Culture](#c1-sec-0009)
4. [From the Margins to the Center of Society](#c1-sec-0013)
5. [Notes](#c1-ntgp-9999)
5. [II: Forms](#c2)
1. [Referentiality](#c2-sec-0002)
2. [Communality](#c2-sec-0009)
3. [Algorithmicity](#c2-sec-0018)
4. [Notes](#c2-ntgp-9999)
6. [III: Politics](#c3)
1. [Post-democracy](#c3-sec-0002)
2. [Commons](#c3-sec-0011)
3. [Against a Lack of Alternatives](#c3-sec-0017)
4. [Notes](#c3-ntgp-9999)

[Preface to the English Edition]{.chapterTitle} {#fpref}

::: {.section}
This book posits that we in the societies of the (transatlantic) West
find ourselves in a new condition. I call it "the digital condition"
because it gained its dominance as computer networks became established
as the key infrastructure for virtually all aspects of life. However,
the emergence of this condition pre-dates computer networks. In fact, it
has deep historical roots, some of which go back to the late nineteenth
century, but it really came into being after the late 1960s. As many of
the cultural and political institutions shaped by the previous condition
-- which McLuhan called the Gutenberg Galaxy -- fell into crisis, new
forms of personal and collective orientation and organization emerged
which have been shaped by the affordances of this new condition. Both
the historical processes which unfolded over a very long time and the
structural transformation which took place in a myriad of contexts have
been beyond any deliberate influence. Although obviously caused by
social actors, the magnitude of such changes was simply too great, too
distributed, and too complex to be attributed to, or molded by, any
particular (set of) actor(s).

Yet -- and this is the core of what motivated me to write this book --
this does not mean that we have somehow moved beyond the political,
beyond the realm in which identifiable actors and their projects do
indeed shape our collective []{#Page_vii type="pagebreak"
title="vii"}existence, or that there are no alternatives to future
development already expressed within contemporary dynamics. On the
contrary, we can see very clearly that as the center -- the established
institutions shaped by the affordances of the previous condition -- is
crumbling, more economic and political projects are rushing in to fill
that void with new institutions that advance their competing agendas.
These new institutions are well adapted to the digital condition, with
its chaotic production of vast amounts of information and innovative
ways of dealing with that.

From this, two competing trajectories have emerged which are
simultaneously transforming the space of the political. First, I used
the term "post-democracy" because it expands possibilities, and even
requirements, of (personal) participation, while ever larger aspects of
(collective) decision-making are moved to arenas that are structurally
disconnected from those of participation. In effect, these arenas are
forming an authoritarian reality in which a small elite is vastly
empowered at the expense of everyone else. The purest incarnation of
this tendency can be seen in the commercial social mass media, such as
Facebook, Google, and the others, as they were newly formed in this
condition and have not (yet) had to deal with the complications of
transforming their own legacy.

For the other trajectory, I applied the term "commons" because it
expands both the possibilities of personal participation and agency, and
those of collective decision-making. This tendency points to a
redefinition of democracy beyond the hollowed-out forms of political
representation characterizing the legacy institutions of liberal
democracy. The purest incarnation of this tendency can be found in the
institutions that produce the digital commons, such as Wikipedia and the
various Free Software communities whose work has been and still is
absolutely crucial for the infrastructural dimensions of the digital
networks. They are the most advanced because, again, they have not had
to deal with institutional legacies. But both tendencies are no longer
confined to digital networks and are spreading across all aspects of
social life, creating a reality that is, on the structural level,
surprisingly coherent and, on the social and political level, full of
contradictions and thus opportunities.[]{#Page_viii type="pagebreak"
title="viii"}

I traced some aspects of these developments right up to early 2016, when
the German version of this book went into production. Since then a lot
has happened, but I resisted the temptation to update the book for the
English translation because ideas are always an expression of their
historical moment and, as such, updating either turns into a completely
new version or a retrospective adjustment of the historical record.

What has become increasingly obvious during 2016 and into 2017 is that
central institutions of liberal democracy are crumbling more quickly and
dramatically than was expected. The race to replace them has kicked into
high gear. The main events driving forward an authoritarian renewal of
politics took place on a national level, in particular the vote by the
UK to leave the EU (Brexit) and the election of Donald Trump to the
office of president of the United States of America. The main events
driving the renewal of democracy took place on a metropolitan level,
namely the emergence of a network of "rebel cities," led by Barcelona
and Madrid. There, community-based social movements established their
candidates in the highest offices. These cities are now putting in place
practical examples that other cities could emulate and adapt. For the
concerns of this book, the most important concept put forward is that of
"technological sovereignty": to bring the technological infrastructure,
and its developmental potential, back under the control of those who are
using it and are affected by it; that is, the citizens of the
metropolis.

Over the last 18 months, the imbalances between the two trajectories
have become even more extreme because authoritarian tendencies and
surveillance capitalism have been strengthened more quickly than the
commons-oriented practices could establish themselves. But it does not
change the fact that there are fundamental alternatives embedded in the
digital condition. Despite structural transformations that affect how we
do things, there is no inevitability about what we want to do
individually and, even more importantly, collectively.

::: {.poem}
::: {.lineGroup}
Zurich/Vienna, July 2017[]{#Page_ix type="pagebreak" title="ix"}
:::
:::
:::

[Acknowledgments]{.chapterTitle} {#ack}

::: {.section}
While it may be conventional to cite one person as the author of a book,
writing is a process with many collective elements. This book in
particular draws upon many sources, most of which I am no longer able to
acknowledge with any certainty. Far too often, important references came
to me in parenthetical remarks, in fleeting encounters, during trips, at
the fringes of conferences, or through discussions of things that,
though entirely new to me, were so obvious to others as not to warrant
any explication. Often, too, my thinking was influenced by long
conversations, and it is impossible for me now to identify the precise
moments of inspiration. As far as the themes of this book are concerned,
four settings were especially important. The international discourse
network "nettime," which has a mailing list of 4,500 members and which I
have been moderating since the late 1990s, represents an inexhaustible
source of internet criticism and, as a collaborative filter, has enabled
me to follow a wide range of developments from a particular point of
view. I am also indebted to the Zurich University of the Arts, where I
have taught for more than 10 years and where the students have been
willing to explain to me, again and again, what is already self-evident
to them. Throughout my time there, I have been able to observe a
dramatic shift. For today\'s students, the "new" is no longer new but
simply obvious, whereas they []{#Page_x type="pagebreak" title="x"}have
experienced many things previously regarded as normal -- such as
checking out a book from a library (instead of downloading it) -- as
needlessly complicated. In Vienna, the hub of my life, the World
Information Institute has for many years provided a platform for
conferences, publications, and interventions that have repeatedly raised
the stakes of the discussion and have brought together the most
interesting range of positions without regard to any disciplinary
boundaries. Housed in Vienna, too, is the Technopolitics Project, a
non-institutionalized circle of researchers and artists whose
discussions of techno-economic paradigms have informed this book in
fundamental ways and which has offered multiple opportunities for me to
workshop inchoate ideas.

Not everything, however, takes place in diffuse conversations and
networks. I was also able to rely on the generous support of several
individuals who, at one stage or another, read through, commented upon,
and made crucial improvements to the manuscript: Leonhard Dobusch,
Günther Hack, Katja Meier, Florian Cramer, Cornelia Sollfrank, Beat
Brogle, Volker Grassmuck, Ursula Stalder, Klaus Schönberger, Konrad
Becker, Armin Medosch, Axel Stockburger, and Gerald Nestler. Special
thanks are owed to Rebina Erben-Hartig, who edited the original German
manuscript and greatly improved its readability. I am likewise grateful
to Heinrich Greiselberger and Christian Heilbronn of the Suhrkamp
Verlag, whose faith in the book never wavered despite several delays.
Regarding the English version at hand, it has been a privilege to work
with a translator as skillful as Valentine Pakis. Over the past few
years, writing this book might have been the most important project in
my life had it not been for Andrea Mayr. In this regard, I have been
especially fortunate.[]{#Page_xi type="pagebreak"
title="xi"}[]{#Page_xii type="pagebreak" title="xii"}
:::

Introduction [After the End of the Gutenberg Galaxy]{.chapterTitle} []{.chapterSubTitle} {#cintro}

::: {.section}
The show had already been going on for more than three hours, but nobody
was bothered by this. Quite the contrary. The tension in the venue was
approaching its peak, and the ratings were through the roof. Throughout
all of Europe, 195 million people were watching the spectacle on
television, and the social mass media were gaining steam. On Twitter,
more than 47,000 messages were being sent every minute with the hashtag
\#Eurovision.[^1^](#f6-note-0001){#f6-note-0001a} The outcome was
decided shortly after midnight: Conchita Wurst, the bearded diva, was
announced the winner of the 2014 Eurovision Song Contest. Cheers erupted
as the public celebrated the victor -- but also itself. At long last,
there was more to the event than just another round of tacky television
programming ("This is Ljubljana calling!"). Rather, a statement was made
-- a statement in favor of tolerance and against homophobia, for
diversity and for the right to define oneself however one pleases. And
Europe sent this message in the midst of a crisis and despite ongoing
hostilities, not to mention all of the toxic rumblings that could be
heard about decadence, cultural decay, and Gayropa. Visibly moved, the
Austrian singer let out an exclamation -- "We are unity, and we are
unstoppable!" -- as she returned to the stage with wobbly knees to
accept the trophy.

With her aesthetically convincing performance, Conchita succeeded in
unleashing a strong desire for personal []{#Page_1 type="pagebreak"
title="1"}self-discovery, for community, and for overcoming stale
conventions. And she did this through a character that mainstream
society would have considered paradoxical and deviant not long ago but
has since come to understand: attractive beyond the dichotomy of man and
woman, explicitly artificial and yet entirely authentic. This peculiar
conflation of artificiality and naturalness is equally present in
Berndnaut Smilde\'s photographic work of a real indoor cloud (*Nimbus*,
2010) on the cover of this book. Conchita\'s performance was also on a
formal level seemingly paradoxical: extremely focused and completely
open. Unlike most of the other acts, she took the stage alone, and
though she hardly moved at all, she nevertheless incited the audience to
participate in numerous ways and genuinely to act out the motto of the
contest ("Join us!"). Throughout the early rounds of the competition,
the beard, which was at first so provocative, transformed into a
free-floating symbol that the public began to appropriate in various
ways. Men and women painted Conchita-like beards on their faces,
newspapers printed beards to be cut out, and fans crocheted beards. Not
only did someone Photoshop a beard on to a painting of Empress Sissi of
Austria, but King Willem-Alexander of the Netherlands even tweeted a
deceptively realistic portrait of his wife, Queen Máxima, wearing a
beard. From one of the biggest stages of all, the evening of Wurst\'s
victory conveyed an impression of how much the culture of Europe had
changed in recent years, both in terms of its content and its forms.
That which had long been restricted to subcultural niches -- the
fluidity of gender identities, appropriation as a cultural technique,
or the conflation of reception and production, for instance -- was now
part of the mainstream. Even while sitting in front of the television,
this mainstream was no longer just a private audience but rather a
multitude of singular producers whose networked activity -- on location
or on social mass media -- lent particular significance to the occasion
as a moment of collective self-perception.

It is more than half a century since Marshall McLuhan announced the end
of the Modern era, a cultural epoch that he called the Gutenberg Galaxy
in honor of the print medium by which it was so influenced. What was
once just an abstract speculation of media theory, however, now
describes []{#Page_2 type="pagebreak" title="2"}the concrete reality of
our everyday life. What\'s more, we have moved well past McLuhan\'s
diagnosis: the erosion of old cultural forms, institutions, and
certainties is not just something we affirm, but new ones have already
formed whose contours are easy to identify not only in niche sectors but
in the mainstream. Shortly before Conchita\'s triumph, Facebook thus
expanded the gender-identity options for its billion-plus users from 2
to 60. In addition to "male" and "female," users of the English version
of the site can now choose from among the following categories:

::: {.extract}
Agender, Androgyne, Androgynes, Androgynous, Asexual, Bigender, Cis, Cis
Female, Cis Male, Cis Man, Cis Woman, Cisgender, Cisgender Female,
Cisgender Male, Cisgender Man, Cisgender Woman, Female to Male (FTM),
Female to Male Trans Man, Female to Male Transgender Man, Female to Male
Transsexual Man, Gender Fluid, Gender Neutral, Gender Nonconforming,
Gender Questioning, Gender Variant, Genderqueer, Hermaphrodite,
Intersex, Intersex Man, Intersex Person, Intersex Woman, Male to Female
(MTF), Male to Female Trans Woman, Male to Female Transgender Woman,
Male to Female Transsexual Woman, Neither, Neutrois, Non-Binary, Other,
Pangender, Polygender, T\*Man, Trans, Trans Female, Trans Male, Trans
Man, Trans Person, Trans\*Female, Trans\*Male, Trans\*Man,
Trans\*Person, Trans\*Woman, Transexual, Transexual Female, Transexual
Male, Transexual Man, Transexual Person, Transexual Woman, Transgender
Female, Transgender Person, Transmasculine, T\*Woman, Two\*Person,
Two-Spirit, Two-Spirit Person.
:::

This enormous proliferation of cultural possibilities is an expression
of what I will refer to below as the digital condition. Far from being
universally welcomed, its growing presence has also instigated waves of
nostalgia, diffuse resentments, and intellectual panic. Conservative and
reactionary movements, which oppose such developments and desire to
preserve or even re-create previous conditions, have been on the rise.
Likewise in 2014, for instance, a cultural dispute broke out in normally
subdued Baden-Württemberg over which forms of sexual partnership should
be mentioned positively in the sexual education curriculum. Its impetus
was a working paper released at the end of 2013 by the state\'s
[]{#Page_3 type="pagebreak" title="3"}Ministry of Culture. Among other
things, it proposed that adolescents "should confront their own sexual
identity and orientation \[...\] from a position of acceptance with
respect to sexual diversity."[^2^](#f6-note-0002){#f6-note-0002a} In a
short period of time, a campaign organized mainly through social mass
media collected more than 200,000 signatures in opposition to the
proposal and submitted them to the petitions committee at the state
parliament. At that point, the government responded by putting the
initiative on ice. However, according to the analysis presented in this
book, leaving it on ice creates a precarious situation.

The rise and spread of the digital condition is the result of a
wide-ranging and irreversible cultural transformation, the beginnings of
which can in part be traced back to the nineteenth century. Since the
1960s, however, this shift has accelerated enormously and has
encompassed increasingly broader spheres of social life. More and more
people have been participating in cultural processes; larger and larger
dimensions of existence have become battlegrounds for cultural disputes;
and social activity has been intertwined with increasingly complex
technologies, without which it would hardly be possible to conceive of
these processes, let alone achieve them. The number of competing
cultural projects, works, reference points, and reference systems has
been growing rapidly. This, in turn, has caused an escalating crisis for
the established forms and institutions of culture, which are poorly
equipped to deal with such an inundation of new claims to meaning. Since
roughly the year 2000, many previously independent developments have
been consolidating, gaining strength and modifying themselves to form a
new cultural constellation that encompasses broad segments of society --
a new galaxy, as McLuhan might have
said.[^3^](#f6-note-0003){#f6-note-0003a} These days it is relatively
easy to recognize the specific forms that characterize it as a whole and
how these forms have contributed to new, contradictory and
conflict-laden political dynamics.

My argument, which is restricted to cultural developments in the
(transatlantic) West, is divided into three chapters. In the first, I
will outline the *historical* developments that have given rise to this
quantitative and qualitative change and have led to the crisis faced by
the institutions of the late phase of the Gutenberg Galaxy, which
defined the last third []{#Page_4 type="pagebreak" title="4"}of the
twentieth century.[^4^](#f6-note-0004){#f6-note-0004a} The expansion of
the social basis of cultural processes will be traced back to changes in
the labor market, to the self-empowerment of marginalized groups, and to
the dissolution of centralized cultural geography. The broadening of
cultural fields will be discussed in terms of the rise of design as a
general creative discipline, and the growing significance of complex
technologies -- as fundamental components of everyday life -- will be
tracked from the beginnings of independent media up to the development
of the internet as a mass medium. These processes, which at first
unfolded on their own and may have been reversible on an individual
basis, are integrated today and represent a socially dominant component
of the coherent digital condition. From the perspective of cultural
studies and media theory, the second chapter will delineate the already
recognizable features of this new culture. Concerned above all with the
analysis of forms, its focus is thus on the question of "how" cultural
practices operate. It is only because specific forms of culture,
exchange, and expression are prevalent across diverse varieties of
content, social spheres, and locations that it is even possible to speak
of the digital condition in the singular. Three examples of such forms
stand out in particular. *Referentiality* -- that is, the use of
existing cultural materials for one\'s own production -- is an essential
feature of many methods for inscribing oneself into cultural processes.
In the context of unmanageable masses of shifting and semantically open
reference points, the act of selecting things and combining them has
become fundamental to the production of meaning and the constitution of
the self. The second feature that characterizes these processes is
*communality*. It is only through a collectively shared frame of
reference that meanings can be stabilized, possible courses of action
can be determined, and resources can be made available. This has given
rise to communal formations that generate self-referential worlds, which
in turn modulate various dimensions of existence -- from aesthetic
preferences to the methods of biological reproduction and the rhythms of
space and time. In these worlds, the dynamics of network power have
reconfigured notions of voluntary and involuntary behavior, autonomy,
and coercion. The third feature of the new cultural landscape is its
*algorithmicity*. It is characterized, in other []{#Page_5
type="pagebreak" title="5"}words, by automated decision-making processes
that reduce and give shape to the glut of information, by extracting
information from the volume of data produced by machines. This extracted
information is then accessible to human perception and can serve as the
basis of singular and communal activity. Faced with the enormous amount
of data generated by people and machines, we would be blind were it not
for algorithms.

The third chapter will focus on *political dimensions*. These are the
factors that enable the formal dimensions described in the preceding
chapter to manifest themselves in the form of social, political, and
economic projects. Whereas the first chapter is concerned with long-term
and irreversible historical processes, and the second outlines the
general cultural forms that emerged from these changes with a certain
degree of inevitability, my concentration here will be on open-ended
dynamics that can still be influenced. A contrast will be made between
two political tendencies of the digital condition that are already quite
advanced: *post-democracy* and *commons*. Both take full advantage of
the possibilities that have arisen on account of structural changes and
have advanced them even further, though in entirely different
directions. "Post-democracy" refers to strategies that counteract the
enormously expanded capacity for social communication by disconnecting
the possibility to participate in things from the ability to make
decisions about them. Everyone is allowed to voice his or her opinion,
but decisions are ultimately made by a select few. Even though growing
numbers of people can and must take responsibility for their own
activity, they are unable to influence the social conditions -- the
social texture -- under which this activity has to take place. Social
mass media such as Facebook and Google will receive particular attention
as the most conspicuous manifestations of this tendency. Here, under new
structural provisions, a new combination of behavior and thought has
been implemented that promotes the normalization of post-democracy and
contributes to its otherwise inexplicable acceptance in many areas of
society. "Commons," on the contrary, denotes approaches for developing
new and comprehensive institutions that not only directly combine
participation and decision-making but also integrate economic, social,
and ethical spheres -- spheres that Modernity has tended to keep
apart.[]{#Page_6 type="pagebreak" title="6"}

Post-democracy and commons can be understood as two lines of development
that point beyond the current crisis of liberal democracy and represent
new political projects. One can be characterized as an essentially
authoritarian system, the other as a radical expansion and renewal of
democracy, from the notion of representation to that of participation.

Even though I have brought together a number of broad perspectives, I
have refrained from discussing certain topics that a book entitled *The
Digital Condition* might be expected to address, notably the matter of
copyright, for one example. This is easy to explain. As regards the new
forms at the heart of this book, none of these developments requires or
justifies copyright law in its present form. In any case, my thoughts on
the matter were published not long ago in another book, so there is no
need to repeat them here.[^5^](#f6-note-0005){#f6-note-0005a} The theme
of privacy will also receive little attention. This is not because I
share the view, held by proponents of "post-privacy," that it would be
better for all personal information to be made available to everyone. On
the contrary, this position strikes me as superficial and naïve. That
said, the political function of privacy -- to safeguard a degree of
personal autonomy from powerful institutions -- is based on fundamental
concepts that, in light of the developments to be described below,
urgently need to be updated. This is a task, however, that would take me
far beyond the scope of the present
book.[^6^](#f6-note-0006){#f6-note-0006a}

Before moving on to the first chapter, I should first briefly explain my
somewhat unorthodox understanding of the central concepts in the title
of the book -- "condition" and "digital." In what follows, the term
"condition" will be used to designate a cultural condition whereby the
processes of social meaning -- that is, the normative dimension of
existence -- are explicitly or implicitly negotiated and realized by
means of singular and collective activity. Meaning, however, does not
manifest itself in signs and symbols alone; rather, the practices that
engender it and are inspired by it are consolidated into artifacts,
institutions, and lifeworlds. In other words, far from being a symbolic
accessory or mere overlay, culture in fact directs our actions and gives
shape to society. By means of materialization and repetition, meaning --
both as claim and as reality -- is made visible, productive, and
negotiable. People are free to accept it, reject it, or ignore
[]{#Page_7 type="pagebreak" title="7"}it altogether. Social meaning --
that is, meaning shared by multiple people -- can only come about
through processes of exchange within larger or smaller formations.
Production and reception (to the extent that it makes any sense to
distinguish between the two) do not proceed linearly here, but rather
loop back and reciprocally influence one another. In such processes, the
participants themselves determine, in a more or less binding manner, how
they stand in relation to themselves, to each other, and to the world,
and they determine the frame of reference in which their activity is
oriented. Accordingly, culture is not something static or something that
is possessed by a person or a group, but rather a field of dispute that
is subject to the activities of multiple ongoing changes, each happening
at its own pace. It is characterized by processes of dissolution and
constitution that may be collaborative, oppositional, or simply
operating side by side. The field of culture is pervaded by competing
claims to power and mechanisms for exerting it. This leads to conflicts
about which frames of reference should be adopted for different fields
and within different social groups. In such conflicts,
self-determination and external determination interact until a point is
reached at which both sides are mutually constituted. This, in turn,
changes the conditions that give rise to shared meaning and personal
identity.

In what follows, this broadly post-structuralist perspective will inform
my discussion of the causes and formational conditions of cultural
orders and their practices. Culture will be conceived throughout as
something heterogeneous and hybrid. It draws from many sources; it is
motivated by the widest possible variety of desires, intentions, and
compulsions; and it mobilizes whatever resources might be necessary for
the constitution of meaning. This emphasis on the materiality of culture
is also reflected in the concept of the digital. Media are relational
technologies, which means that they facilitate certain types of
connection between humans and
objects.[^7^](#f6-note-0007){#f6-note-0007a} "Digital" thus denotes the
set of relations that, on the infrastructural basis of digital networks,
is realized today in the production, use, and transformation of
material and immaterial goods, and in the constitution and coordination
of personal and collective activity. In this regard, the focus is less
on the dominance of a certain class []{#Page_8 type="pagebreak"
title="8"}of technological artifacts -- the computer, for instance --
and even less on distinguishing between "digital" and "analog,"
"material" and "immaterial." Even in the digital condition, the analog
has not gone away. Rather, it has been re-evaluated and even partially
upgraded. The immaterial, moreover, is never entirely without
materiality. On the contrary, the fleeting impulses of digital
communication depend on global and unmistakably material infrastructures
that extend from mines beneath the surface of the earth, from which rare
earth metals are extracted, all the way into outer space, where
satellites are circling around above us. Such things may be ignored
because they are outside the experience of everyday life, but that does
not mean that they have disappeared or that they are of any less
significance. "Digital" thus refers to historically new possibilities
for constituting and connecting various human and non-human actors,
which is not limited to digital media but rather appears everywhere as a
relational paradigm that alters the realm of possibility for numerous
materials and actors. My understanding of the digital thus approximates
the concept of the "post-digital," which has been gaining currency over
the past few years within critical media cultures. Here, too, the
distinction between "new" and "old" media and all of the ideological
baggage associated with it -- for instance, that the new represents the
future while the old represents the past -- have been rejected. The
aesthetic projects that continue to define the image of the "digital" --
immateriality, perfection, and virtuality -- have likewise been
discarded.[^8^](#f6-note-0008){#f6-note-0008a} Above all, the
"post-digital" is a critical response to this techno-utopian aesthetic
and its attendant economic and political perspectives. According to the
cultural theorist Florian Cramer, the concept accommodates the fact that
"new ethical and cultural conventions which became mainstream with
internet communities and open-source culture are being retroactively
applied to the making of non-digital and post-digital media
products."[^9^](#f6-note-0009){#f6-note-0009a} He thus cites the trend
that process-based practices oriented toward open interaction, which
first developed within digital media, have since begun to appear in more
and more contexts and in an increasing number of
materials.[^10^](#f6-note-0010){#f6-note-0010a}[]{#Page_9 type="pagebreak" title="9"}

For the historical, cultural-theoretical, and political perspectives
developed in this book, however, the concept of the post-digital is
somewhat problematic, for it requires the narrow context of media art
and its fixation on technology in order to become a viable
counter-position. Without this context, certain misunderstandings are
impossible to avoid. The prefix "post-," for instance, is often
interpreted in the sense that something is over or that we have at least
grasped the matters at hand and can thus turn to something new. The
opposite is true. The most enduringly relevant developments are only now
beginning to adopt a specific form, long after digital infrastructures
and the practices made popular by them have become part of our everyday
lives. Or, as the communication theorist and consultant Clay Shirky puts
it, "Communication tools don\'t get socially interesting until they get
technologically boring."[^11^](#f6-note-0011){#f6-note-0011a} For it is
only today, now that our fascination for this technology has waned and
its promises sound hollow, that culture and society are being defined by
the digital condition in a comprehensive sense. Before, this was the
case in just a few limited spheres. It is this hybridization and
solidification of the digital -- the presence of the digital beyond
digital media -- that lends the digital condition its dominance. As to
the concrete realities in which these things will materialize, this is
currently being decided in an open and ongoing process. The aim of this
book is to contribute to our understanding of this process.[]{#Page_10
type="pagebreak" title="10"}
:::

::: {.section .notesSet type="rearnotes"}
[]{#notesSet}Notes {#f6-ntgp-9999}
------------------

::: {.section .notesList}
[1](#f6-note-0001a){#f6-note-0001}  Dan Biddle, "Five Million Tweets for
\#Eurovision 2014," *Twitter UK* (May 11, 2014), online.

[2](#f6-note-0002a){#f6-note-0002}  Ministerium für Kultus, Jugend und
Sport -- Baden-Württemberg, "Bildungsplanreform 2015/2016 -- Verankerung
von Leitprinzipien," online \[--trans.\].

[3](#f6-note-0003a){#f6-note-0003}  As early as 1995, Wolfgang Coy
suggested that McLuhan\'s metaphor should be supplanted by the concept
of the "Turing Galaxy," but this never caught on. See his introduction
to the German edition of *The Gutenberg Galaxy*: "Von der Gutenbergschen
zur Turingschen Galaxis: Jenseits von Buchdruck und Fernsehen," in
Marshall McLuhan, *Die Gutenberg Galaxis: Das Ende des Buchzeitalters*,
(Cologne: Addison-Wesley, 1995), pp. vii--xviii.[]{#Page_176
type="pagebreak" title="176"}

[4](#f6-note-0004a){#f6-note-0004}  According to the analysis of the
Spanish sociologist Manuel Castells, this crisis began almost
simultaneously in highly developed capitalist and socialist societies,
and it did so for the same reason: the paradigm of "industrialism" had
reached the limits of its productivity. Unlike the capitalist societies,
which were flexible enough to tame the crisis and reorient their
economies, the socialism of the 1970s and 1980s experienced stagnation
until it ultimately, in a belated effort to reform, collapsed. See
Manuel Castells, *End of Millennium*, 2nd edn (Oxford: Wiley-Blackwell,
2010), pp. 5--68.

[5](#f6-note-0005a){#f6-note-0005}  Felix Stalder, *Der Autor am Ende
der Gutenberg Galaxis* (Zurich: Buch & Netz, 2014).

[6](#f6-note-0006a){#f6-note-0006}  For my preliminary thoughts on this
topic, see Felix Stalder, "Autonomy and Control in the Era of
Post-Privacy," *Open: Cahier on Art and the Public Domain* 19 (2010):
78--86; and idem, "Privacy Is Not the Antidote to Surveillance,"
*Surveillance & Society* 1 (2002): 120--4. For a discussion of these
approaches, see the working paper by Maja van der Velden, "Personal
Autonomy in a Post-Privacy World: A Feminist Technoscience Perspective"
(2011), online.

[7](#f6-note-0007a){#f6-note-0007}  Accordingly, the "new social" media
are mass media in the sense that they influence broadly disseminated
patterns of social relations and thus shape society as much as the
traditional mass media had done before them.

[8](#f6-note-0008a){#f6-note-0008}  Kim Cascone, "The Aesthetics of
Failure: 'Post-Digital' Tendencies in Contemporary Computer Music,"
*Computer Music Journal* 24/2 (2000): 12--18.

[9](#f6-note-0009a){#f6-note-0009}  Florian Cramer, "What Is
'Post-Digital'?" *Post-Digital Research* 3 (2014), online.

[10](#f6-note-0010a){#f6-note-0010}  In the field of visual arts,
similar considerations have been made regarding "post-internet art." See
Artie Vierkant, "The Image Object Post-Internet,"
[jstchillin.org](http://jstchillin.org) (December 2010), online; and Ian
Wallace, "What Is Post-Internet Art? Understanding the Revolutionary New
Art Movement," *Artspace* (March 18, 2014), online.

[11](#f6-note-0011a){#f6-note-0011}  Clay Shirky, *Here Comes Everybody:
The Power of Organizing without Organizations* (New York: Penguin,
2008), p. 105.
:::
:::

[I]{.chapterNumber} [Evolution]{.chapterTitle} {#c1}
::: {.section}
Many authors have interpreted the new cultural realities that
characterize our daily lives as a direct consequence of technological
developments: the internet is to blame! This assumption is not only
empirically untenable; it also leads to a problematic assessment of the
current situation. Apparatuses are represented as "central actors," and
this suggests that new technologies have suddenly revolutionized a
situation that had previously been stable. Depending on one\'s point of
view, this is then regarded as "a blessing or a
curse."[^1^](#c1-note-0001){#c1-note-0001a} A closer examination,
however, reveals an entirely different picture. Established cultural
practices and social institutions had already been witnessing the
erosion of their self-evident justification and legitimacy, long before
they were faced with new technologies and the corresponding demands
these make on individuals. Moreover, the allegedly new types of
coordination and cooperation are also not so new after all. Many of them
have existed for a long time. At first most of them were totally
separate from the technologies for which, later on, they would become
relevant. It is only in retrospect that these developments can be
identified as beginnings, and it can be seen that much of what we regard
today as novel or revolutionary was in fact introduced at the margins of
society, in cultural niches that were unnoticed by the dominant actors
and institutions. The new technologies thus evolved against a
[]{#Page_11 type="pagebreak" title="11"}background of processes of
societal transformation that were already under way. They could only
have been developed once a vision of their potential had been
formulated, and they could only have been disseminated where demand for
them already existed. This demand was created by social, political, and
economic crises, which were themselves initiated by changes that were
already under way. The new technologies seemed to provide many differing
and promising answers to the urgent questions that these crises had
prompted. It was thus a combination of positive vision and pressure that
motivated a great variety of actors to change, at times with
considerable effort, the established processes, mature institutions, and
their own behavior. They intended to appropriate, for their own
projects, the various and partly contradictory possibilities that they
saw in these new technologies. Only then did a new technological
infrastructure arise.

This, in turn, created the preconditions for previously independent
developments to come together, strengthening one another and enabling
them to spread beyond the contexts in which they had originated. Thus,
they moved from the margins to the center of culture. And by
intensifying the crisis of previously established cultural forms and
institutions, they became dominant and established new forms and
institutions of their own.
:::

::: {.section}
The Expansion of the Social Basis of Culture {#c1-sec-0002}
--------------------------------------------

Watching television discussions from the 1950s and 1960s today, one is
struck not only by the billows of cigarette smoke in the studio but also
by the homogeneous spectrum of participants. Usually, it was a group of
white and heteronormatively behaving men speaking with one
another,[^2^](#c1-note-0002){#c1-note-0002a} as these were the people
who held the important institutional positions in the centers of the
West. As a rule, those involved were highly specialized representatives
from the cultural, economic, scientific, and political spheres. Above
all, they were legitimized to appear in public to articulate their
opinions, which were to be regarded by others as relevant and worthy of
discussion. They presided over the important debates of their time. With
few exceptions, other actors and their deviant opinions -- there
[]{#Page_12 type="pagebreak" title="12"}has never been a time without
them -- were either not taken seriously at all or were categorized as
indecent, incompetent, perverse, irrelevant, backward, exotic, or
idiosyncratic.[^3^](#c1-note-0003){#c1-note-0003a} Even at that time,
the social basis of culture was beginning to expand, though the actors
at the center of the discourse had failed to notice this. Communicative
and cultural processes were gaining significance in more and more
places, and excluded social groups were self-consciously developing
their own language in order to intervene in the discourse. The rise of
the knowledge economy, the increasingly loud critique of
heteronormativity, and a fundamental cultural critique posed by
post-colonialism enabled a greater number of people to participate in
public discussions. In what follows, I will subject each of these three
phenomena to closer examination. In order to do justice to their
complexity, I will treat them on different levels: I will depict the
rise of the knowledge economy as a structural change in labor; I will
reconstruct the critique of heteronormativity by outlining the origins
and transformations of the gay movement in West Germany; and I will
discuss post-colonialism as a theory that introduced new concepts of
cultural multiplicity and hybridization -- concepts that are now
influencing the digital condition far beyond the limits of the
post-colonial discourse, and often without any reference to this
discourse at all.

::: {.section}
### The growth of the knowledge economy {#c1-sec-0003}

At the beginning of the 1950s, the Austrian-American economist Fritz
Machlup was immersed in his study of the political economy of
monopoly.[^4^](#c1-note-0004){#c1-note-0004a} Among other things, he was
concerned with patents and copyright law. In line with the neo-classical
Austrian School, he considered both to be problematic (because
state-created) monopolies.[^5^](#c1-note-0005){#c1-note-0005a} The
longer he studied the monopoly of the patent system in particular, the
more far-reaching its consequences seemed to him. He maintained that the
patent system was intertwined with something that might be called the
"economy of invention" -- ultimately, patentable insights had to be
produced in the first place -- and that this was in turn part of a much
larger economy of knowledge. The latter encompassed government agencies
as well as institutions of education, research, and development
[]{#Page_13 type="pagebreak" title="13"}(that is, schools, universities,
and certain corporate laboratories), which had been increasing steadily
in number since Roosevelt\'s New Deal. Yet it also included the
expanding media sector and those industries that were responsible for
providing technical infrastructure. Machlup subsumed all of these
institutions and sectors under the concept of the "knowledge economy," a
term of his own invention. Their common feature was that essential
aspects of their activities consisted in communicating things to other
people ("telling anyone anything," as he put it). Thus, the employees
were not only recipients of information or instructions; rather, in one
way or another, they themselves communicated, be it merely as a
secretary who typed up, edited, and forwarded a piece of shorthand
dictation. In his book *The Production and Distribution of Knowledge in
the United States*, published in 1962, Machlup gathered empirical
material to demonstrate that the American economy had entered a new
phase that was distinguished by the production, exchange, and
application of abstract, codified
knowledge.[^6^](#c1-note-0006){#c1-note-0006a} This opinion was no
longer entirely novel at the time, but it had never before been
presented in such an empirically detailed and comprehensive
manner.[^7^](#c1-note-0007){#c1-note-0007a} The extent of the knowledge
economy surprised Machlup himself: in his book, he concluded that as
much as 43 percent of all labor activity was already engaged in this
sector. This high number came about because, until then, no one had put
forward the idea of understanding such a variety of activities as a
single unit.

Machlup\'s categorization was indeed quite innovative, for the dynamics
that propelled the sectors that he associated with one another not only
were very different but also had originated as an integral component in
the development of the industrial production of goods. They were more of
an extension of such production than a break with it. The production and
circulation of goods had been expanding and accelerating as early as the
nineteenth century, though at highly divergent rates from one region or
sector to another. New markets were created in order to distribute goods
that were being produced in greater numbers; new infrastructure for
transportation and communication was established in order to serve these
large markets, which were mostly in the form of national territories
(including their colonies). This []{#Page_14 type="pagebreak"
title="14"}enabled even larger factories to be built in order to
exploit, to an even greater extent, the cost advantages of mass
production. In order to control these complex processes, new professions
arose with different types of competencies and working conditions. The
office became a workplace for an increasing number of people -- men and
women alike -- who, in one form or another, had something to do with
information processing and communication. Yet all of this required not
only new management techniques. Production and products also became more
complex, so that entire corporate sectors had to be restructured.
Whereas the first decisive inventions of the industrial era were still
made by more or less educated tinkerers, during the last third of the
nineteenth century, invention itself came to be institutionalized. In
Germany, Siemens (founded in 1847 as the Telegraphen-Bauanstalt von
Siemens & Halske) exemplifies this transformation. Within 50 years, a
company that began in a proverbial workshop in a Berlin backyard became
a multinational high-tech corporation. It was in such corporate
laboratories, which were established around the year 1900, that the
"industrialization of invention" or the "scientification of industrial
production" took place.[^8^](#c1-note-0008){#c1-note-0008a} In other
words, even the processes employed in factories and the goods that they
produced became knowledge-intensive. Their invention, planning, and
production required a steadily growing expansion of activities, which
today we would refer to as research and development. The informatization
of the economy -- the acceleration of mass production, the comprehensive
application of scientific methods to the organization of labor, and the
central role of research and development in industry -- was hastened
enormously by a world war that was waged on an industrial scale to an
extent that had never been seen before.

Another important factor for the increasing significance of the
knowledge economy was the development of the consumer society. Over the
course of the last third of the nineteenth century, despite dramatic
regional and social disparities, an increasing number of people profited
from the economic growth that the Industrial Revolution had instigated.
Wages increased and basic needs were largely met, so that a new social
stratum arose, the middle class, which was able to spend part of its
income on other things. But on what? First, []{#Page_15 type="pagebreak"
title="15"}new needs had to be created. The more production capacities
increased, the more they had to be rethought in terms of consumption.
Thus, in yet another way, the economy became more knowledge-intensive.
It was now necessary to become familiar with, understand, and stimulate
the interests and preferences of consumers, in order to entice them to
purchase products that they did not urgently need. This knowledge did
little to enhance the material or logistical complexity of goods or
their production; rather, it was reflected in the increasingly extensive
communication about and through these goods. The beginnings of this
development were captured by Émile Zola in his 1883 novel *The Ladies\'
Paradise*, which was set in the new world of a semi-fictitious
department store bearing that name. In its opening scene, the young
protagonist Denise Baudu and her brother Jean, both of whom have just
moved to Paris from a provincial town, encounter for the first time the
artfully arranged women\'s clothing -- exhibited with all sorts of
tricks involving lighting, mirrors, and mannequins -- in the window
displays of the store. The sensuality of the staged goods is so
overwhelming that both of them are not only struck dumb, but Jean even
blushes.

It was the economy of affects that brought blood to Jean\'s cheeks. At
that time, strategies for attracting the attention of customers did not
yet have a scientific and systematic basis. Just as the first inventions
in the age of industrialization were made by amateurs, so too was the
economy of affects developed intuitively and gradually rather than as a
planned or conscious paradigm shift. That it was possible to induce and
direct affects by means of targeted communication was the pioneering
discovery of the Austrian-American Edward Bernays. During the 1920s, he
combined the ideas of his uncle Sigmund Freud about unconscious
motivations with the sociological research methods of opinion surveys to
form a new discipline: market
research.[^9^](#c1-note-0009){#c1-note-0009a} It became the scientific
basis of a new field of activity, which he at first called "propaganda"
but then later referred to as "public
relations."[^10^](#c1-note-0010){#c1-note-0010a} Public communication,
be it for economic or political ends, was now placed on a systematic
foundation that came to distance itself more and more from the pure
"conveyance of information." Communication became a strategic field for
corporate and political disputes, and the mass media []{#Page_16
type="pagebreak" title="16"}became their locus of negotiation. Between
1880 and 1917, for instance, commercial advertising costs in the United
States increased by more than 800 percent, and the leading advertising
firms, using the same techniques with which they attracted consumers to
products, were successful in selling to the American public the idea of
their nation entering World War I. Thus, a media industry in the modern
sense was born, and it expanded along with the rapidly growing market
for advertising.[^11^](#c1-note-0011){#c1-note-0011a}

In his studies of labor markets conducted at the beginning of the 1960s,
Machlup brought these previously separate developments together and
thus explained the existence of an already advanced knowledge economy in
the United States. His arguments fell on extremely fertile soil, for an
intellectual transformation had taken place in other areas of science as
well. A few years earlier, for instance, cybernetics had given the
concepts "information" and "communication" their first scientifically
precise (if somewhat idiosyncratic) definitions and had assigned to them
a position of central importance in all scientific disciplines, not to
mention life in general.[^12^](#c1-note-0012){#c1-note-0012a} Machlup\'s
investigation seemed to confirm this in the case of the economy, given
that the knowledge economy was primarily concerned with information and
communication. Since then, numerous analyses, formulas, and slogans have
repeated, modified, refined, and criticized the idea that the
knowledge-based activities of the economy have become increasingly
important. In the 1970s this discussion was associated above all with
the notion of the "post-industrial
society,"[^13^](#c1-note-0013){#c1-note-0013a} in the 1980s the guiding
idea was the "information society,"[^14^](#c1-note-0014){#c1-note-0014a}
and in the 1990s the debate revolved around the "network
society"[^15^](#c1-note-0015){#c1-note-0015a} -- to name just the most
popular concepts. What these approaches have in common is that they each
diagnose a comprehensive societal transformation that, as regards the
creation of economic value or jobs, has shifted the balance from
productive to communicative activities. Accordingly, they presuppose
that we know how to distinguish the former from the latter. This is not
unproblematic, however, because in practice the two are usually tightly
intertwined. Moreover, whoever maintains that communicative activities
have taken the place of industrial production in our society has adopted
a very narrow point of []{#Page_17 type="pagebreak" title="17"}view.
Factory jobs have not simply disappeared; they have just been partially
relocated outside of Western economies. The assertion that communicative
activities are somehow of "greater value" hardly chimes with the reality
of today\'s new "service jobs," many of which pay no more than the
minimum wage.[^16^](#c1-note-0016){#c1-note-0016a} Critiques of this
sort, however, have done little to reduce the effectiveness of this
analysis -- especially its political effectiveness -- for it does more
than simply describe a condition. It also contains a set of political
instructions that imply or directly demand that precisely those sectors
it considers economically promising should be promoted, and that
society should be reorganized accordingly. Since the 1970s, there has
thus been a feedback loop between scientific analysis and political
agendas. More often than not, it is hardly possible to distinguish
between the two. Especially in Britain and the United States, the
economic transformation of the 1980s was pushed through deliberately and
with political calculation (for instance, by weakening the labor unions).

There are, however, important differences between the developments of
the so-called "post-industrial society" of the 1970s and those of the
so-called "network society" of the 1990s, even if both terms are
supposed to stress the increased significance of information, knowledge,
and communication. With regard to the digital condition, the most
important of these differences are the greater flexibility of economic
activity in general and employment relations in particular, as well as
the dismantling of social security systems. Neither phenomenon played
much of a role in analyses of the early 1970s. The development since
then can be traced back to two currents that could not seem more
different from one another. At first, flexibility was demanded in the
name of a critique of the value system imposed by bureaucratic-bourgeois
society (including the traditional organization of the workforce). It
originated in the new social movements that had formed in the late
1960s. Later on, toward the end of the 1970s, it then became one of the
central points of the neoliberal critique of the welfare state. With
completely different motives, both sides sang the praises of autonomy
and spontaneity while rejecting the disciplinary nature of hierarchical
organization. They demanded individuality and diversity rather than
conformity to prescribed roles. Experimentation, openness to []{#Page_18
type="pagebreak" title="18"}new ideas, flexibility, and change were now
established as fundamental values with positive connotations. Both
movements operated with the attractive idea of personal freedom. The new
social movements understood this in a social sense as the freedom of
personal development and coexistence, whereas neoliberals understood it
in an economic sense as the freedom of the market. In the 1980s, the
neoliberal ideas prevailed in large part because some of the values,
strategies, and methods propagated by the new social movements were
removed from their political context and appropriated in order to
breathe new life -- a "new spirit" -- into capitalism and thus to rescue
industrial society from its crisis.[^17^](#c1-note-0017){#c1-note-0017a}
An army of management consultants, restructuring experts, and new
companies began to promote flat hierarchies, self-responsibility, and
innovation; with these aims in mind, they set about reorganizing large
corporations into small and flexible units. Labor and leisure were no
longer supposed to be separated, for all aspects of a given person could
be integrated into his or her work. In order to achieve economic success
in this new capitalism, it became necessary for every individual to
identify himself or herself with his or her profession. Large
corporations were restructured in such a way that entire departments
found themselves transformed into independent "profit centers." This
happened in the name of creating more leeway for decision-making and of
optimizing the entrepreneurial spirit on all levels, the goals being to
increase value creation and to provide management with more fine-grained
powers of intervention. These measures, in turn, created the need for
computers and the need for them to be networked. Large corporations
reacted in this way to the emergence of highly specialized small
companies which, by networking and cooperating with other firms,
succeeded in quickly and flexibly exploiting niches in the expanding
global markets. In the management literature of the 1980s, the
catchphrases for this were "company networks" and "flexible
specialization."[^18^](#c1-note-0018){#c1-note-0018a} By the middle of
the 1990s, the sociologist Manuel Castells was able to conclude that the
actual productive entity was no longer the individual company but rather
the network consisting of companies and corporate divisions of various
sizes. In Castells\'s estimation, the decisive advantage of the network
is its ability to customize its elements and their configuration
[]{#Page_19 type="pagebreak" title="19"}to suit the rapidly changing
requirements of the "project" at
hand.[^19^](#c1-note-0019){#c1-note-0019a} Aside from a few exceptions,
companies in their traditional forms came to function above all as
strategic control centers and as economic and legal units.

This economic structural transformation was already well under way when
the internet emerged as a mass medium around the turn of the millennium.
As a consequence, change became more radical and penetrated into an
increasing number of areas of value creation. The political agenda
oriented itself toward the vision of "creative industries," a concept
developed in 1997 by the newly elected British government under Tony
Blair. A Creative Industries Task Force was established right away, and
its first step was to identify "those activities which have their
origins in individual creativity, skill and talent and which have the
potential for wealth and job creation through the generation and
exploitation of intellectual
property."[^20^](#c1-note-0020){#c1-note-0020a} Like Fritz Machlup at
the beginning of the 1960s, the task force brought together existing
areas of activity into a new category. Such activities included
advertising, computer games, architecture, music, arts and antique
markets, publishing, design, software and computer services, fashion,
television and radio, and film and video. These activities were elevated to
matters of political importance on account of their potential to create
wealth and jobs. Not least because of this clever presentation of
categories -- no distinction was made between the BBC, an almighty
public-service provider, and fledgling companies in precarious
circumstances -- it was possible to proclaim not only that the creative
industries were contributing a relevant portion of the nation\'s
economic output, but also that this sector was growing at an especially
fast rate. It was reported that, in London, the creative industries were
already responsible for one out of every five new jobs. When compared
with traditional terms of employment as regards income, benefits, and
prospects for advancement, however, many of these positions entailed a
considerable downgrade for the employees in question (who were now
treated as independent contractors). This fact was either ignored or
explicitly interpreted as a sign of the sector\'s particular
dynamism.[^21^](#c1-note-0021){#c1-note-0021a} Around the turn of the
new millennium, the idea that individual creativity plays a central role
in the economy was given further traction by []{#Page_20
type="pagebreak" title="20"}the sociologist and consultant Richard
Florida, who argued that creativity was essential to the future of
cities and even announced the rise of the "creative class." As to the
preconditions that have to be met in order to tap into this source of
wealth, he devised a simple formula that would be easy for municipal
bureaucrats to understand: "technology, tolerance and talent." Talent,
as defined by Florida, is based on individual creativity and education
and manifests itself in the ability to generate new jobs. He was thus
able to declare talent a central element of economic
growth.[^22^](#c1-note-0022){#c1-note-0022a} In order to "unleash" these
resources, what we need in addition to technology is, above all,
tolerance; that is, "an open culture -- one that does not discriminate,
does not force people into boxes, allows us to be ourselves, and
validates various forms of family and of human
identity."[^23^](#c1-note-0023){#c1-note-0023a}

The idea that a public welfare state should ensure the social security
of individuals was considered obsolete. Collective institutions, which
could have provided a degree of stability for people\'s lifestyles, were
dismissed or regarded as bureaucratic obstacles. The more or less
directly evoked role model for all of this was the individual artist,
who was understood as an individual entrepreneur, a sort of genius
suitable for the masses. For Florida, a central problem was that,
according to his own calculations, only about a third of the people
living in North American and European cities were working in the
"creative sector," while the innate creativity of everyone else was
going to waste. Even today, the term "creative industry," along with the
assumption that the internet will provide increased opportunities,
serves to legitimize the effort to restructure all areas of the economy
according to the needs of the knowledge economy and to privilege the
network over the institution. In times of social cutbacks and empty
public purses, especially in municipalities, this message was warmly
received. One mayor, who as the first openly gay top politician in
Germany exemplified tolerance for diverse lifestyles, even adopted the
slogan "poor but sexy" for his city. Everyone was supposed to exploit
his or her own creativity to discover new niches and opportunities for
monetization -- a magic formula that was supposed to bring about a new
urban revival. Today there is hardly a city in Europe that does not
issue a report about its creative economy, []{#Page_21 type="pagebreak"
title="21"}and nearly all of these reports cite, directly or indirectly,
Richard Florida.

As already seen in the context of the knowledge economy, so too in the
case of creative industries do measurable social change, wishful
thinking, and political agendas blend together in such a way that it is
impossible to identify a single cause for the developments taking place.
The consequences, however, are significant. Over the last two
generations, the demands of the labor market have fundamentally changed.
Higher education and the ability to acquire new knowledge independently
are now, to an increasing extent, required and expected as
qualifications and personal attributes. The desired or enforced ability
to be flexible at work, the widespread cooperation across institutions,
the uprooted nature of labor, and the erosion of collective models for
social security have displaced many activities, which once took place
within clearly defined institutional or personal limits, into a new
interstitial space that is neither private nor public in the classical
sense. This is the space of networks, communities, and informal
cooperation -- the space of sharing and exchange that has since been
enabled by the emergence of ubiquitous digital communication. It allows
an increasing number of people, whether willingly or otherwise, to
envision themselves as active producers of information, knowledge,
capability, and meaning. And because it is associated in various ways
with the space of market-based exchange and with the bourgeois political
sphere, it has lasting effects on both. This interstitial space becomes
all the more important as fewer people are willing or able to rely on
traditional institutions for their economic security. For, within it,
personal and digital-based networks can and must be developed as
alternatives, regardless of whether they prove sustainable for the long
term. As a result, more and more actors, each with their own claims to
meaning, have been rushing away from the private personal sphere into
this new interstitial space. By now, this has become such a normal
practice that whoever is *not* active in this ever-expanding
interstitial space, which is rapidly becoming the main social sphere --
whoever, that is, lacks a publicly visible profile on social mass media
like Facebook, or does not number among those producing information and
meaning and is thus so inconspicuous online as []{#Page_22
type="pagebreak" title="22"}to yield no search results -- now stands out
in a negative light (or, in far fewer cases, acquires a certain prestige
on account of this very absence).
:::

::: {.section}
### The erosion of heteronormativity {#c1-sec-0004}

In this (sometimes more, sometimes less) public space for the continuous
production of social meaning (and its exploitation), there is no
question that the professional middle class is
over-represented.[^24^](#c1-note-0024){#c1-note-0024a} It would be
short-sighted, however, to reduce those seeking autonomy and the
recognition of individuality and social diversity to the role of poster
children for the new spirit of
capitalism.[^25^](#c1-note-0025){#c1-note-0025a} The new social
movements, for instance, initiated a social shift that has allowed an
increasing number of people to demand, if nothing else, the right to
participate in social life in a self-determined manner; that is,
according to their own standards and values.

Especially effective was the critique of patriarchal and heteronormative
power relations, modes of conduct, and
identities.[^26^](#c1-note-0026){#c1-note-0026a} In the context of the
political upheavals at the end of the 1960s, the new women\'s and gay
movements developed into influential actors. Their greatest achievement
was to establish alternative cultural forms, lifestyles, and strategies
of action in or around the mainstream of society. How this was done can
be demonstrated by tracing, for example, the development of the gay
movement in West Germany.

In the fall of 1969, the liberalization of Paragraph 175 of the German
Criminal Code came into effect. From then on, sexual activity between
adult men was no longer punishable by law (women were not mentioned in
this context). For the first time, a man could now express himself as a
homosexual outside of semi-private space without immediately being
exposed to the risk of criminal prosecution. This was a necessary
precondition for the ability to defend one\'s own rights. As early as
1971, the struggle for the recognition of gay life experiences reached
the broader public when Rosa von Praunheim\'s film *It Is Not the
Homosexual Who Is Perverse, but the Society in Which He Lives* was
screened at the Berlin International Film Festival and then, shortly
thereafter, broadcast on public television in North Rhine-Westphalia.
The film, which is firmly situated in the agitprop tradition,
[]{#Page_23 type="pagebreak" title="23"}follows a young provincial man
through the various milieus of Berlin\'s gay subcultures: from a
monogamous relationship to nightclubs and public bathrooms until, at the
end, he is enlightened by a political group of men who explain that it
is not possible to lead a free life in a niche, as his own emancipation
can only be achieved by a transformation of society as a whole. The film
closes with a not-so-subtle call to action: "Out of the closets, into
the streets!" Von Praunheim understood this emancipation to be a process
that encompassed all areas of life and had to be carried out in public;
it could only achieve success, moreover, in solidarity with other
freedom movements such as the Black Panthers in the United States and
the new women\'s movement. The goal, according to this film, is to
articulate one\'s own identity as a specific and differentiated identity
with its own experiences, values, and reference systems, and to anchor
this identity within a society that not only tolerates it but also
recognizes it as having equal validity.

At first, however, the film triggered vehement controversies, even
within the gay scene. The objection was that it attacked the gay
subculture, which was not yet prepared to defend itself publicly against
discrimination. Despite or (more likely) because of these controversies,
more than 50 groups of gay activists soon formed in Germany. Such
groups, largely composed of left-wing alternative students, included,
for instance, the Homosexuelle Aktion Westberlin (HAW) and the Rote
Zelle Schwul (RotZSchwul) in Frankfurt am
Main.[^27^](#c1-note-0027){#c1-note-0027a} One focus of their activities
was to have Paragraph 175 struck entirely from the legal code (which was
not achieved until 1994). This cause was framed within a general
struggle to overcome patriarchy and capitalism. At the earliest gay
demonstrations in Germany, which took place in Münster in April 1972,
protesters rallied behind the following slogan: "Brothers and sisters,
gay or not, it is our duty to fight capitalism." This was understood as
a necessary subordination to the greater struggle against what was known
in the terminology of left-wing radical groups as the "main
contradiction" of capitalism (that between capital and labor), and it
led to strident differences within the gay movement. The dispute
escalated during the next year. After the so-called *Tuntenstreit*, or
"Battle of the Queens," which was []{#Page_24 type="pagebreak"
title="24"}initiated by activists from Italy and France who had appeared
in drag at the closing ceremony of the HAW\'s Spring Meeting in West
Berlin, the gay movement was divided, or at least moving in a new
direction. At the heart of the matter were the following questions: "Is
there an inherent (many speak of an autonomous) position that gays hold
with respect to the issue of homosexuality? Or can a position on
homosexuality only be derived in association with the traditional
workers\' movement?"[^28^](#c1-note-0028){#c1-note-0028a} In other
words, was discrimination against homosexuality part of the social
divide caused by capitalism (that is, one of its "ancillary
contradictions") and thus only to be overcome by overcoming capitalism
itself, or was it something unrelated to the "essence" of capitalism, an
independent conflict requiring different strategies and methods? This
conflict could never be fully resolved, but the second position, which
was more interested in overcoming legal, social, and cultural
discrimination than in struggling against economic exploitation, and
which focused specifically on the social liberation of gays, proved to
be far more dynamic in the long term. This was not least because both
the old and new left were themselves not free of homophobia and because
the entire radical student movement of the 1970s fell into crisis.

Over the course of the 1970s and 1980s, "aesthetic self-empowerment" was
realized through the efforts of artistic and (increasingly) commercial
producers of images, texts, and
sounds.[^29^](#c1-note-0029){#c1-note-0029a} Activists, artists, and
intellectuals developed a language with which they could speak
assertively in public about topics that had previously been taboo.
Inspired by the expression "gay pride," which originated in the United
States, they began to use the term *schwul* ("gay"), which until then
had possessed negative connotations, with growing confidence. They
founded numerous gay and lesbian cultural initiatives, theaters,
publishing houses, magazines, bookstores, meeting places, and other
associations in order to counter the misleading or (in their eyes)
outright false representations of the mass media with their own
multifarious media productions. In doing so, they typically followed a
dual strategy: on the one hand, they wanted to create a space for the
members of the movement in which it would be possible to formulate and
live different identities; on the other hand, they were fighting to be
accepted by society at large. While []{#Page_25 type="pagebreak"
title="25"}a broader and broader spectrum of gay positions, experiences,
and aesthetics was becoming visible to the public, the connection to
left-wing radical contexts became weaker. Founded as early as 1974, and
likewise in West Berlin, the General Homosexual Working Group
(Allgemeine Homosexuelle Arbeitsgemeinschaft) sought to integrate gay
politics into mainstream society by defining that politics -- on the basis
of bourgeois, individual rights -- as a "politics of
anti-discrimination." These efforts achieved a milestone in 1980 when,
in the run-up to the parliamentary election, a podium discussion was
held with representatives of all major political parties on the topic of
the law governing sexual offences. The discussion took place in the
Beethovenhalle in Bonn, which was the largest venue for political events
in the former capital. Several participants considered the event to be a
"disaster,"[^30^](#c1-note-0030){#c1-note-0030a} for it revived a number
of internal conflicts (not least that between revolutionary and
integrative positions). Yet the fact remains that representatives were
present from every political party, and this alone was indicative of an
unprecedented amount of public awareness for those demanding equal
rights.

The struggle against discrimination and for social recognition reached
an entirely new level of urgency with the outbreak of HIV/AIDS. In 1983,
the magazine *Der Spiegel* devoted its first cover story to the disease,
thus bringing it to the awareness of the broader public. In the same
year, the non-profit organization Deutsche Aids-Hilfe was founded to
prevent further cases of discrimination, for *Der Spiegel* was not the
only publication at the time to refer to AIDS as a "homosexual
epidemic."[^31^](#c1-note-0031){#c1-note-0031a} The struggle against
HIV/AIDS required a comprehensive mobilization. Funding had to be raised
in order to deal with the social repercussions of the epidemic, to teach
people about safe sexual practices for everyone and to direct research
toward discovering causes and developing potential cures. The immediate
threat that AIDS represented, especially while so little was known about
the illness and its treatment remained a distant hope, created an
impetus for mobilization that led to alliances between the gay movement,
the healthcare system, and public authorities. Thus, the AIDS Inquiry
Committee, sponsored by the conservative Christian Democratic Union,
concluded in 1988 that, in the fight against the illness, "the
homosexual subculture is []{#Page_26 type="pagebreak"
title="26"}especially important. This informal structure should
therefore neither be impeded nor repressed but rather, on the contrary,
recognized and supported."[^32^](#c1-note-0032){#c1-note-0032a} The AIDS
crisis proved to be a catalyst for advancing the integration of gays
into society and for expanding what could be regarded as acceptable
lifestyles, opinions, and cultural practices. As a consequence,
homosexuals began to appear more frequently in the media, though their
presence would never match that of heterosexuals. As of 1985, the
television show *Lindenstraße* featured an openly gay protagonist, and
the first kiss between men was aired in 1987. The episode still provoked
a storm of protest -- Bayerischer Rundfunk refused to broadcast it a
second time -- but this was already a rearguard action and the
integration of gays (and lesbians) into the social mainstream continued.
In 1993, the first gay and lesbian city festival took place in Berlin,
and the first Rainbow Parade was held in Vienna in 1996. In 2002, the
Cologne Pride Day involved 1.2 million participants and attendees, thus
surpassing for the first time the attendance at the traditional Rose
Monday parade. By the end of the 1990s, the sociologist Rüdiger Lautmann
was already prepared to maintain: "To be homosexual has become
increasingly normalized, even if homophobia lives on in the depths of
the collective disposition."[^33^](#c1-note-0033){#c1-note-0033a} This
normalization was also reflected in a study published by the Ministry of
Justice in the year 2000, which stressed "the similarity between
homosexual and heterosexual relationships" and, on this basis, made an
argument against discrimination.[^34^](#c1-note-0034){#c1-note-0034a}
Around the year 2000, however, the classical gay movement had already
passed its peak. A profound transformation had begun to take place in
the middle of the 1990s. It lost its character as a new social movement
(in the style of the 1970s) and began to splinter inwardly and
outwardly. One could say that it transformed from a mass movement into a
multitude of variously networked communities. The clearest sign of this
transformation is the abbreviation "LGBT" (lesbian, gay, bisexual, and
transgender), which, since the mid-1990s, has represented the internal
heterogeneity of the movement as it has shifted toward becoming a
network.[^35^](#c1-note-0035){#c1-note-0035a} At this point, the more
radical actors were already speaking against the normalization of
homosexuality. Queer theory, for example, was calling into question the
"essentialist" definition of gender []{#Page_27 type="pagebreak"
title="27"}-- that is, any definition reducing it to an immutable
essence -- with respect to both its physical dimension (sex) and its
social and cultural dimension (gender
proper).[^36^](#c1-note-0036){#c1-note-0036a} It thus opened up a space
for the articulation of experiences, self-descriptions, and lifestyles
that, on every level, are located beyond the classical attributions of
men and women. A new generation of intellectuals, activists, and artists
took the stage and developed -- yet again through acts of aesthetic
self-empowerment -- a language that enabled them to import, with
confidence, different self-definitions into the public sphere. An
example of this is the adoption of inclusive plural forms in German
(*Aktivist\_innen* "activists," *Künstler\_innen* "artists"), which draw
attention to the gaps and possibilities between male and female
identities that are also expressed in the language itself. Just as with
the terms "gay" or *schwul* some 30 years before, in this case, too, an
important element was the confident and public adoption and semantic
conversion of a formerly insulting word ("queer") by the very people and
communities against whom it used to be
directed.[^37^](#c1-note-0037){#c1-note-0037a} Likewise observable in
these developments was the simultaneity of social (amateur) and
artistic/scientific (professional) cultural production. The goal,
however, was less to produce a clear antithesis than it was to oppose
rigid attributions by underscoring mutability, hybridity, and
uniqueness. Both the scope of what could be expressed in public and the
circle of potential speakers expanded yet again. And, at least to some
extent, the drag queen Conchita Wurst popularized complex gender
constructions that went beyond the simple woman/man dualism. All of that
said, the assertion by Rüdiger Lautmann quoted above -- "homophobia
lives on in the depths of the collective disposition" -- continued to
hold true.

If the gay movement is representative of the social liberation of the
1970s and 1980s, then it is possible to regard its transformation into
the LGBT movement during the 1990s -- with its multiplicity and fluidity
of identity models and its stress on mutability and hybridity -- as a
sign of the reinvention of this project within the context of an
increasingly dominant digital condition. With this transformation,
however, the diversification and fluidification of cultural practices
and social roles have not yet come to an end. Ways of life that were
initially subcultural and facing existential pressure []{#Page_28
type="pagebreak" title="28"}are gradually entering the mainstream. They
are expanding the range of readily available models of identity for
anyone who might be interested, be it with respect to family forms
(e.g., patchwork families, adoption by same-sex couples), diets (e.g.,
vegetarianism and veganism), healthcare (e.g., anti-vaccination), or
other principles of life and belief. All of them are seeking public
recognition for a new frame of reference for social meaning that has
originated from their own activity. This is necessarily a process
characterized by conflicts and various degrees of resistance, including
right-wing populism that seeks to defend "traditional values," but many
of these movements will ultimately succeed in providing more people with
the opportunity to speak in public, thus broadening the palette of
themes that are considered to be important and legitimate.
:::

::: {.section}
### Beyond center and periphery {#c1-sec-0005}

In order to reach a better understanding of the complexity involved in
the expanding social basis of cultural production, it is necessary to
shift yet again to a different level. For, just as it would be myopic to
examine the multiplication of cultural producers only in terms of
professional knowledge workers from the middle class, it would likewise
be insufficient to situate this multiplication exclusively in the
centers of the West. The entire system of categories that justified the
differentiation between the cultural "center" and the cultural
"periphery" has begun to falter. This complex and multilayered process
has been formulated and analyzed by the theory of "post-colonialism."
Long before digital media made the challenge of cultural multiplicity a
quotidian issue in the West, proponents of this theory had developed
languages and terminologies for negotiating different positions without
needing to impose a hierarchical order.

Since the 1970s, the theoretical current of post-colonialism has been
examining the cultural and epistemic dimensions of colonialism that,
even after its end as a territorial system, have remained responsible
for the continuation of dependent relations and power differentials. For
my purposes -- which are to develop a European perspective on the
factors ensuring that more and more people are able to participate in
cultural []{#Page_29 type="pagebreak" title="29"}production -- two
points are especially relevant because their effects reverberate in
Europe itself. First is the deconstruction of the categories "West" (in
the sense of the center) and "East" (in the sense of the periphery). And
second is the focus on hybridity as a specific way for non-Western
actors to deal with the dominant cultures of former colonial powers,
which have continued to determine significant portions of globalized
culture. The terms "West" and "East," "center" and "periphery," do not
simply describe existing conditions; rather, they are categories that
contribute, in an important way, to the creation of the very conditions
that they presume to describe. This may sound somewhat circular, but it
is precisely from this circularity that such cultural classifications
derive their strength. The world that they illuminate is immersed in
their own light. The category "East" -- or, to use the term of the
literary theorist Edward Said,
"orientalism"[^38^](#c1-note-0038){#c1-note-0038a} -- is a system of
representation that pervades Western thinking. Within this system,
Europe or the West (as the center) and the East (as the periphery)
represent asymmetrical and antithetical concepts. This construction
achieves a dual effect. As a self-description, on the one hand, it
contributes to the formation of our own identity, for Europeans
attribute to themselves and to their continent such features as
"rationality," "order," and "progress," while on the other hand
identifying the alternative with "superstition," "chaos," or
"stagnation." The East, moreover, is used as an exotic projection screen
for our own suppressed desires. According to Said, a representational
system of this sort can only take effect if it becomes "hegemonic"; that
is, if it is perceived as self-evident and no longer as an act of
attribution but rather as one of description, even and precisely by
those against whom the system discriminates. Said\'s accomplishment is
to have worked out how far-reaching this system was -- and, in many
areas, still is today. It extended (and extends) from scientific
disciplines, whose researchers discussed (until the 1980s) the theory of
"oriental despotism,"[^39^](#c1-note-0039){#c1-note-0039a} to literature
and art -- the motif of the harem was especially popular, particularly
in paintings of the late nineteenth
century[^40^](#c1-note-0040){#c1-note-0040a} -- all the way to everyday
culture, where, as of 1913 in the United States, the cigarette brand
Camel (introduced to compete with the then-leading brand, Fatima) was
meant to evoke the []{#Page_30 type="pagebreak" title="30"}mystique and
sensuality of the Orient.[^41^](#c1-note-0041){#c1-note-0041a} This
system of representation, however, was more than a means of describing
oneself and others; it also served to legitimize the allocation of all
knowledge and agency on to one side, that of the West. Such an order was
not restricted to culture; it also created and legitimized a sense of
domination for colonial projects.[^42^](#c1-note-0042){#c1-note-0042a}
This cultural legitimation, as Said points out, also persists after the
end of formal colonial domination and continues to marginalize the
postcolonial subjects. As before, they are unable to speak for
themselves and therefore remain in the dependent periphery, which is
defined by their subordinate position in relation to the center. Said
directed the focus of critique to this arrangement of center and
periphery, which he saw as being (re)produced and legitimized on the
cultural level. From this arose the demand that everyone should have the
right to speak, to place him- or herself in the center. To achieve this,
it was necessary first of all to develop a language -- indeed, a
cultural landscape -- that can manage without a hegemonic center and is
thus oriented toward multiplicity instead of
uniformity.[^43^](#c1-note-0043){#c1-note-0043a}

A somewhat different approach has been taken by the literary theorist
Homi K. Bhabha. He proceeds from the idea that the colonized never adopt,
in a purely passive manner, the culture of the colonialists -- the "English book,"
as he calls it. Their previous culture is never simply wiped out and
replaced by another. What always and necessarily occurs is rather a
process of hybridization. This concept, according to Bhabha,

::: {.extract}
suggests that all of culture is constructed around negotiations and
conflicts. Every cultural practice involves an attempt -- sometimes
good, sometimes bad -- to establish authority. Even classical works of
art, such as a painting by Brueghel or a composition by Beethoven, are
concerned with the establishment of cultural authority. Now, this poses
the following question: How does one function as a negotiator when
one\'s own sense of agency is limited, for instance, on account of being
excluded or oppressed? I think that, even in the role of the underdog,
there are opportunities to upend the imposed cultural authorities -- to
accept some aspects while rejecting others. It is in this way that
symbols of authority are hybridized and made into something of one\'s
own. For me, hybridization is not simply a mixture but rather a
[]{#Page_31 type="pagebreak" title="31"}strategic and selective
appropriation of meanings; it is a way to create space for negotiators
whose freedom and equality are
endangered.[^44^](#c1-note-0044){#c1-note-0044a}
:::

Hybridization is thus a cultural strategy for evading marginality that
is imposed from the outside: subjects, who from the dominant perspective
are incapable of doing so, appropriate certain aspects of culture for
themselves and transform them into something else. What is decisive is
that this hybrid, created by means of active and unauthorized
appropriation, opposes the dominant version and the resulting speech is
thus legitimized from another -- that is, from one\'s own -- position.
In this way, a cultural engagement is set under way and the superiority
of one meaning or another is called into question. Who has the right to
determine how and why a relationship with others should be entered,
which resources should be appropriated from them, and how these
resources should be used? At the heart of the matter lie the abilities
of speech and interpretation; these can be seized in order to create
space for a "cultural hybridity that entertains difference without an
assumed or imposed hierarchy."[^45^](#c1-note-0045){#c1-note-0045a}

At issue is thus a strategy for breaking down hegemonic cultural
conditions, which distribute agency in a highly uneven manner, and for
turning one\'s own cultural production -- which has been dismissed by
cultural authorities as flawed, misconceived, or outright ignorant --
into something negotiable and independently valuable. Bhabha is thus
interested in fissures, differences, diversity, multiplicity, and
processes of negotiation that generate something like shared meaning --
culture, as he defines it -- instead of conceiving of it as something
that precedes these processes and is threatened by them. Accordingly, he
proceeds not from the idea of unity, which is threatened whenever
"others" are empowered to speak and needs to be preserved, but rather
from the irreducible multiplicity that, through laborious processes, can
be brought into temporary and limited consensus. Bhabha\'s vision of
culture is one without immutable authorities, interpretations, and
truths. In theory, everything can be brought to the table. This is not a
situation in which anything goes, yet the central meaning of
negotiation, the contextuality of consensus, and the mutability of every
frame of reference []{#Page_32 type="pagebreak" title="32"}-- none of
which can be shared equally by everyone -- are always potentially
negotiable.

Post-colonialism draws attention to the "disruptive power of the
excluded-included third," which becomes especially virulent when it
"emerges in the middle of semantic
structures."[^46^](#c1-note-0046){#c1-note-0046a} The recognition of
this power reveals the increasing cultural independence of those
formerly colonized, and it also transforms the cultural self-perception
of the West, for, even in Western nations that were not significant
colonial powers, there are multifaceted tensions between dominant
cultures and those who are on the defensive against discrimination and
attributions by others. Instead of relying on the old recipe of
integration through assimilation (that is, the dissolution of the
"other"), the right to self-determined difference is being called for
more emphatically. In such a manner, collective identities, such as
national identities, are freed from their questionable appeals to
cultural homogeneity and essentiality, and reconceived in terms of the
experience of immanent difference. Instead of one binding and
unnegotiable frame of reference for everyone, which hierarchizes
individual positions and makes them appear unified, a new order without
such limitations needs to be established. Ultimately, the aim is to
provide nothing less than an "alternative reading of
modernity,"[^47^](#c1-note-0047){#c1-note-0047a} which influences both
the construction of the past and the modalities of the future. For
European culture in particular, such a project is an immense challenge.

Of course, these demands do not derive their everyday relevance
primarily from theory but rather from the experiences of
(de)colonization, migration, and globalization. Multifaceted as it is,
however, the theory does provide forms and languages for articulating
these phenomena, legitimizing new positions in public debates, and
attacking persistent mechanisms of cultural marginalization. It helps to
empower broader societal groups to become actively involved in cultural
processes, namely people, such as migrants and their children, whose
identity and experience are essentially shaped by non-Western cultures.
The latter have been giving voice to their experiences more frequently
and with greater confidence in all areas of public life, be it in
politics, literature, music, or
art.[^48^](#c1-note-0048){#c1-note-0048a} In Germany, for instance, the
films by Fatih Akin (*Head-On* from 2004 and *Soul Kitchen* from 2009,
to []{#Page_33 type="pagebreak" title="33"}name just two), in which the
experience of immigration is represented as part of the German
experience, have reached a wide public audience. In 2002, the group
Kanak Attak organized a series of conferences with the telling motto *no
integración*, and these did much to introduce postcolonial positions to
the debates taking place in German-speaking
countries.[^49^](#c1-note-0049){#c1-note-0049a} For a long time,
politicians with "migration backgrounds" were considered to be competent
in only one area, namely integration policy. This has since changed,
though not entirely. In 2008, for instance, Cem Özdemir was elected
co-chair of the Green Party and thus shares responsibility for all of
its political positions. Developments of this sort have been enabled
(and strengthened) by a shift in society\'s self-perception. In 2014,
Cemile Giousouf, the integration commissioner for the conservative
CDU/CSU alliance in the German Parliament, was able to make the
following statement without inciting any controversy: "Over the past few
years, Germany has become a modern land of
immigration."[^50^](#c1-note-0050){#c1-note-0050a} A remarkable
proclamation. Not ten years earlier, her party colleague Norbert Lammert
had expressed, in his function as parliamentary president, interest in
reviving the debate about the term "leading culture." The increasingly
well-educated migrants of the first, second, or third generation no
longer accept the choice of being either marginalized as an exotic
representative of the "other" or entirely assimilated. Rather, they are
insisting on being able to introduce their specific experience as a
constitutive contribution to the formation of the present -- in
association and in conflict with other contributions, but at the same
level and with the same legitimacy. It is no surprise that various forms
of discrimination and violence against "foreigners" not only continue
in everyday life but have also been increasing in reaction to this new
situation. Ultimately, established claims to power are being called into
question.

To summarize, at least three secular historical tendencies or movements,
some of which can be traced back to the late nineteenth century but each
of which gained considerable momentum during the last third of the
twentieth (the spread of the knowledge economy, the erosion of
heteronormativity, and the focus of post-colonialism on cultural
hybridity), have greatly expanded the sphere of those who actively
negotiate []{#Page_34 type="pagebreak" title="34"}social meaning. In
large part, the patterns and cultural foundations of these processes
developed long before the internet. Through the use of the internet, and
through the experiences of dealing with it, they have encroached upon
far greater portions of all societies.
:::
:::

::: {.section}
The Culturalization of the World {#c1-sec-0006}
--------------------------------

The number of participants in cultural processes, however, is not the
only thing that has increased. Parallel to that development, the field
of the cultural has expanded as well -- that is, those areas of life
that are not simply characterized by unalterable necessities, but rather
contain or generate competing options and thus require conscious
decisions.

The term "culturalization of the economy" refers to the central position
of knowledge-based, meaning-based, and affect-oriented processes in the
creation of value. With the emergence of consumption as the driving
force behind the production of goods and the concomitant necessity of
having not only to satisfy existing demands but also to create new ones,
the cultural and affective dimensions of the economy began to gain
significance. I have already discussed the beginnings of product
staging, advertising, and public relations. In addition to all of the
continuities that remain with us from that time, it is also possible to
point out a number of major changes that consumer society has undergone
since the late 1960s. These changes can be delineated by examining the
greater role played by design, which has been called the "core
discipline of the creative
economy."[^51^](#c1-note-0051){#c1-note-0051a}

As a field of its own, design originated alongside industrialization,
when, in collaborative processes, the activities of planning and
designing were separated from those of carrying out
production.[^52^](#c1-note-0052){#c1-note-0052a} It was not until the
modern era that designers consciously endeavored to seek new forms for
the logic inherent to mass production. With the aim of economic
efficiency, they intended their designs to optimize the clearly defined
functions of anonymous and endlessly reproducible objects. At the end of
the nineteenth century, the architect Louis Sullivan, whose buildings
still distinguish the skyline of Chicago, condensed this new attitude
into the famous axiom []{#Page_35 type="pagebreak" title="35"}"form
follows function." Mies van der Rohe, working as an architect in Chicago
in the middle of the twentieth century, supplemented this with a pithy
and famous formulation of his own: "less is more." The rationality of
design, in the sense of isolating and improving specific functions, and
the economical use of resources were of chief importance to modern
(industrial) designers. Even the ten design principles of Dieter Rams,
who led the design division of the consumer products company Braun from
1965 to 1991 -- one of the main sources of inspiration for Jonathan Ive,
Apple\'s chief design officer -- aimed to make products "usable,"
"understandable," "honest," and "long-lasting." "Good design," according
to his guiding principle, "is as little design as
possible."[^53^](#c1-note-0053){#c1-note-0053a} This orientation toward
the technical and functional promised to solve problems for everyone in
a long-term and binding manner, for the inherent material and design
qualities of an object were supposed to make it independent from
changing times and from the tastes of consumers.

::: {.section}
### Beyond the object {#c1-sec-0007}

At the end of the 1960s, a new generation of designers rebelled against
this industrial and instrumental rationality, which was now felt to be
authoritarian, soulless, and reductionist. In the works associated with
"anti-design" or "radical design," the objectives of the discipline were
redefined and a new formal language was developed. In the place of
technical and functional optimization, recombination -- ecological
recycling or the postmodern interplay of forms -- emerged as a design
method and aesthetic strategy. Moreover, the aspiration of design
shifted from the individual object to its entire social and material
environment. The processes of design and production, which had been
closed off from one another and restricted to specialists, were opened
up precisely to encourage the participation of non-designers, be it
through interdisciplinary cooperation with other types of professions or
through the empowerment of laymen. The objectives of design were
radically expanded: rather than ending with the completion of an
individual product, it was now supposed to engage with society. In the
sense of cybernetics, this was regarded as a "system," controlled by
feedback processes, []{#Page_36 type="pagebreak" title="36"}which
connected social, technical, and biological dimensions to one
another.[^54^](#c1-note-0054){#c1-note-0054a} Design, according to this
new approach, was meant to be a "socially significant
activity."[^55^](#c1-note-0055){#c1-note-0055a}

Embedded in the social movements of the 1960s and 1970s, this new
generation of designers was curious about the social and political
potential of their discipline, and about possibilities for promoting
flexibility and autonomy instead of rigid industrial efficiency. Design
was no longer expected to solve problems once and for all, for such an
idea did not correspond to the self-perception of an open and mutable
society. Rather, it was expected to offer better opportunities for
enabling people to react to continuously changing conditions. A radical
proposal was developed by the Italian designer Enzo Mari, who in 1974
published his handbook *Autoprogettazione* (Self-Design). It contained
19 simple designs with which people could make, on their own,
aesthetically and functionally sophisticated furniture out of pre-cut
pieces of wood. In this case, the designs themselves were less important
than the critique of conventional design as elitist and of consumer
society as alienated and wasteful. Mari\'s aim was to reconceive the
relations among designers, the manufacturing industry, and users.
Increasingly, design came to be understood as a holistic and open
process. Victor Papanek, the founder of ecological design, took things a
step further. For him, design was "basic to all human activity. The
planning and patterning of any act towards a desired, foreseeable end
constitutes the design process. Any attempt to separate design, to make
it a thing-by-itself, works counter to the inherent value of design as
the primary underlying matrix of
life."[^56^](#c1-note-0056){#c1-note-0056a}

Potentially all aspects of life could therefore fall under the purview
of design. This came about from the desire to oppose industrialism,
which was blind to its catastrophic social and ecological consequences,
with a new and comprehensive manner of seeing and acting that was
unrestricted by economics.

Toward the end of the 1970s, this expanded notion of design owed less
and less to emancipatory social movements, and its socio-political goals
began to fall by the wayside. Three fundamental patterns survived,
however, which go beyond design and remain characteristic of the
culturalization []{#Page_37 type="pagebreak" title="37"}of the economy:
the discovery of the public as emancipated users and active
participants; the use of appropriation, transformation, and
recombination as methods for creating ever-new aesthetic
differentiations; and, finally, the intention of shaping the lifeworld
of the user.[^57^](#c1-note-0057){#c1-note-0057a}

As these patterns became depoliticized and commercialized, the focus of
designing the "lifeworld" shifted more and more toward designing the
"experiential world." By the end of the 1990s, this had become so
normalized that even management consultants could assert that
"\[e\]xperiences represent an existing but previously unarticulated
*genre of economic output*."[^58^](#c1-note-0058){#c1-note-0058a} It was
possible to define the dimensions of the experiential world in various
ways. For instance, it could be clearly delimited and product-oriented,
like the flagship stores introduced by Nike in 1990, which, with their
elaborate displays, were meant to turn shopping into an experience. This
experience, as the company\'s executives hoped, radiated outward and
influenced how the brand was perceived as a whole. The experiential
world could also, however, be conceived in somewhat broader terms, for
instance by designing entire institutions around the idea of creating a
more attractive work environment and thereby increasing the commitment
of employees. This approach is widespread today in creative industries
and has become popularized through countless stories about ping-pong
tables, gourmet cafeterias, and massage rooms in certain offices. In
this case, the process of creativity is applied back to itself in order
to systematize and optimize a given workplace\'s basis of operation. The
development is comparable to the "invention of invention" that
characterized industrial research around the end of the nineteenth
century, though now the concept has been relocated to the field of
knowledge production.

Yet the "experiential world" can be expanded even further, for instance
when entire cities attempt to make themselves attractive to
international clientele and compete with others by building spectacular
museums or sporting arenas. Displays in cities, as well as a few other
central locations, are regularly constructed in order to produce a
particular experience. This also entails, however, that certain forms of
use that fail to fit the "urban
script"[^59^](#c1-note-0059){#c1-note-0059a} are pushed to the margins
or driven away.[^60^](#c1-note-0060){#c1-note-0060a} Thus, today, there
is hardly a single area of life to []{#Page_38 type="pagebreak"
title="38"}which the strategies and methods of design do not have
access, and this access occurs at all levels. For some time, design has
not been a purely visible matter, restricted to material objects; it
rather forms and controls all of the senses. Cities, for example, have
come to be understood increasingly as "sound spaces" and have
accordingly been reconfigured with the goal of modulating their various
noises.[^61^](#c1-note-0061){#c1-note-0061a} Yet design is no longer
just a matter of objects, processes, and experiences. By now, in the
context of reproductive medicine, it has even been applied to the
biological foundations of life ("designer babies"). I will revisit this
topic below.
:::

::: {.section}
### Culture everywhere {#c1-sec-0008}

Of course, design is not the only field of culture that has imposed
itself over society as a whole. A similar development has occurred in
the field of advertising, which, since the 1970s, has been integrated
into many more physical and social spaces and by now has a broad range
of methods at its disposal. Advertising is no longer found simply on
billboards or in display windows. In the form of "guerilla marketing" or
"product placement," it has penetrated every space and occupied every
discourse -- by blending with political messages, for instance -- and
can now even be spread, as "viral marketing," by the addressees of the
advertisements themselves. Similar processes can be observed in the
fields of art, fashion, music, theater, and sports. This has taken place
perhaps most radically in the field of "gaming," which has drawn upon
technical progress in the most direct possible manner and, with the
spread of powerful computers and mobile applications, has left behind
the confines of the traditional playing field. In alternate reality
games, the realm of the virtual and fictitious has also been
transcended, as physical spaces have been overlaid with their various
scripts.[^62^](#c1-note-0062){#c1-note-0062a}

This list could be extended, but the basic trend is clear enough,
especially as the individual fields overlap and mutually influence one
another. They are blending into a single interdependent field for
generating social meaning in the form of economic activity. Moreover,
through digitalization and networking, many new opportunities have
arisen for large-scale involvement by the public in design processes.
Thanks []{#Page_39 type="pagebreak" title="39"}to new communication
technologies and flexible production processes, today\'s users can
personalize and create products to suit their wishes. Here, the spectrum
extends from tiny batches of creative-industrial products all the way to
global processes of "mass customization," in which factory-based mass
production is combined with personalization. One of the first
applications of this was introduced in 1999 when, through its website, a
sporting-goods company allowed customers to design certain elements of a
shoe by altering it within a set of guidelines. This was taken a step
further by the idea of "user-centered innovation," which relies on the
specific knowledge of users to enhance a product, with the additional
hope of discovering unintended applications and transforming these into
new areas of business.[^63^](#c1-note-0063){#c1-note-0063a} It has also
become possible for end users to take over the design process from the
beginning, which has become considerably easier with the advent of
specialized platforms for exchanging knowledge, alongside semi-automated
production tools such as mechanical mills and 3D printers.
Digitalization, which has allowed all content to be processed, and
networking, which has created an endless amount of content ("raw
material"), have turned appropriation and recombination into general
methods of cultural production.[^64^](#c1-note-0064){#c1-note-0064a}
This phenomenon will be examined more closely in the next chapter.

Both the involvement of users in the production process and the methods
of appropriation and recombination are extremely information-intensive
and communication-intensive. Without the corresponding technological
infrastructure, neither could be achieved efficiently or on a large
scale. This was evident in the 1970s, when such approaches never made it
beyond subcultures and conceptual studies. With today\'s search engines,
every single user can trawl through an amount of information that, just
a generation ago, would have been unmanageable even by professional
archivists. A broad array of communication platforms (together with
flexible production capacities and efficient logistics) not only weakens
the contradiction between mass fabrication and personalization; it also
allows users to network directly with one another in order to develop
specialized knowledge together and thus to enable themselves to
intervene directly in design processes, both as []{#Page_40
type="pagebreak" title="40"}willing participants in and as critics of
flexible global production processes.
:::
:::

::: {.section}
The Technologization of Culture {#c1-sec-0009}
-------------------------------

That society is dependent on complex information technologies in order
to organize its constitutive processes is, in itself, nothing new.
Rather, this began as early as the late nineteenth century. It is
directly correlated with the expansion and acceleration of the
circulation of goods, which came about through industrialization. As the
historian and sociologist James Beniger has noted, this led to a
"control crisis," for administrative control centers were faced with the
problem of losing sight of what was happening in their own factories,
with their suppliers, and in the important markets of the time.
Management was in a bind: decisions had to be made either on the basis
of insufficient information or too late. The existing administrative and
control mechanisms could no longer deal with the rapidly increasing
complexity and time-sensitive nature of extensively organized production
and distribution. The office became more important, and ever more people
were needed there to fulfill a growing number of functions. Yet this was
not enough for the crisis to subside. The old administrative methods,
which involved manual information processing, simply could no longer
keep up. The crisis reached its first dramatic peak in 1889 in the
United States, with the realization that the census data from the year
1880 had not yet been analyzed when the next census was already
scheduled to take place during the subsequent year. In the same year,
the Secretary of the Interior organized a conference to investigate
faster methods of data processing. Two methods for making manual labor
more efficient were tested, one of which relied on novel data-processing
machines. The latter emerged as the clear victor: developed by an
engineer named Hermann Hollerith, it mechanically processed and stored
data on punch cards. The idea was based on Hollerith\'s observations of the
coupling and decoupling of railroad cars, which he interpreted as
modular units that could be combined in any desired order. The punch
card transferred this approach to information []{#Page_41
type="pagebreak" title="41"}management. Data were no longer stored in
fixed, linear arrangements (tables and lists) but rather in small units
(the punch cards) that, like railroad cars, could be combined in any
given way. The increase in efficiency -- with respect to speed *and*
flexibility -- was enormous, and nearly a hundred of Hollerith\'s
machines were used by the Census
Bureau.[^65^](#c1-note-0065){#c1-note-0065a} This marked a turning point
in the history of information processing, with technical means no longer
being used exclusively to store data, but to process data as well. This
was the only way to avoid the impending crisis, ensuring that
bureaucratic management could maintain centralized control. Hollerith\'s
machines proved to be a resounding success and were implemented in many
more branches of government and corporate administration, where
data-intensive processes had increased so rapidly they could not have
been managed without such machines. This growth was accompanied by that
of Hollerith\'s Tabulating Machine Company, which he founded in 1896 and
which, after a number of mergers, was renamed in 1924 as the
International Business Machines Corporation (IBM). Throughout the
following decades, dependence on information-processing machines only
deepened. The growing number of social, commercial, and military
processes could only be managed by means of information technology. This
largely took place, however, outside of public view, namely in the
specialized divisions of large government and private organizations.
These were the only institutions in command of the necessary resources
for operating the complex technical infrastructure -- so-called
mainframe computers -- that was essential to automatic information
processing.

::: {.section}
### The independent media {#c1-sec-0010}

As with so much else, this situation began to change in the 1960s. Mass
media and information-processing technologies began to attract
criticism, even though all of the involved subcultures, media activists,
and hackers continued to act independently from one another until the
1990s. The freedom-oriented social movements of the 1960s began to view
the mass media as part of the political system against which they were
struggling. The connections among the economy, politics, and the media
were becoming more apparent, not []{#Page_42 type="pagebreak"
title="42"}least because many mass media companies, especially those in
Germany related to the Springer publishing house, were openly inimical
to these social movements. Critical theories arose that, borrowing
Louis Althusser\'s influential term, regarded the media as part of the
"ideological state apparatus"; that is, as one of the authorities whose
task is to influence people to accept social relations to such a degree
that the "repressive state apparatuses" (the police, the military, etc.)
form a constant background in everyday
life.[^66^](#c1-note-0066){#c1-note-0066a} Similarly influential,
Antonio Gramsci\'s theory of "cultural hegemony" emphasized the
condition in which the governed are manipulated to form a cultural
consensus with the ruling class; they accept the latter\'s
presuppositions (and the politics which are thus justified) even though,
by doing so, they are forced to suffer economic
disadvantages.[^67^](#c1-note-0067){#c1-note-0067a} Guy Debord and the
Situationists attributed to the media a central role in the new form of
rule known as "the spectacle," the glittery surfaces and superficial
manifestations of which served to conceal society\'s true
relations.[^68^](#c1-note-0068){#c1-note-0068a} In doing so, they
aligned themselves with the critique of the "culture industry," which
had been formulated by Max Horkheimer and Theodor W. Adorno at the
beginning of the 1940s and had become a widely discussed key text by the
1960s.

Their differences aside, these perspectives were united in that they no
longer understood the "public" as a neutral sphere, in which citizens
could inform themselves freely and form their opinions, but rather as
something that was created with specific intentions and consequences.
From this grew an interest in "counter-publics"; that is, in forums
where other actors could appear and negotiate theories of their own. The
mass media thus became an important instrument for organizing the
bourgeois--capitalist public, but they were also responsible for the
development of alternatives. Media, according to one of the core ideas
of these new approaches, are not so much a sphere in which an external
reality is depicted; rather, they are themselves a constitutive element of
reality.
:::

::: {.section}
### Media as lifeworlds {#c1-sec-0011}

Another branch of new media theories, that of Marshall McLuhan and the
Toronto School of Communication,[^69^](#c1-note-0069){#c1-note-0069a}
[]{#Page_43 type="pagebreak" title="43"}reached a similar conclusion on
different grounds. In 1964, McLuhan aroused a great deal of attention
with his slogan "the medium is the message." He maintained that every
medium of communication, by means of its media-specific characteristics,
directly affected the consciousness, self-perception, and worldview of
every individual.[^70^](#c1-note-0070){#c1-note-0070a} This, he
believed, happens independently of and in addition to whatever specific
message a medium might be conveying. From this perspective, reality does
not exist outside of media, given that media codetermine our personal
relation to and behavior in the world. For McLuhan and the Toronto
School, media were thus not channels for transporting content but rather
the all-encompassing environments -- galaxies -- in which we live.

Such ideas were circulating much earlier and were intensively developed
by artists, many of whom were beginning to experiment with new
electronic media. An important starting point in this regard was the
1963 exhibit *Exposition of Music -- Electronic Television* by the
Korean artist Nam June Paik, who was then collaborating with Karlheinz
Stockhausen in Düsseldorf. Among other things, Paik presented 12
television sets, the screens of which were "distorted" by magnets. Here,
however, "distorted" is a problematic term, for, as Paik explicitly
noted, the electronic images were "a beautiful slap in the face of
classic dualism in philosophy since the time of Plato. \[...\] Essence
AND existence, essentia AND existentia. In the case of the electron,
however, EXISTENTIA IS ESSENTIA."[^71^](#c1-note-0071){#c1-note-0071a}
Paik no longer understood the electronic image on the television screen
as a portrayal or representation of anything. Rather, it engendered in
the moment of its appearance an autonomous reality beyond and
independent of its representational function. A whole generation of
artists began to explore forms of existence in electronic media, which
they no longer understood as pure media of information. In his work
*Video Corridor* (1969--70), Bruce Nauman stacked two monitors at the
end of a corridor that was approximately 10 meters long but only 50
centimeters wide. On the lower monitor ran a video showing the empty
hallway. The upper monitor displayed an image captured by a camera
installed at the entrance of the hall, about 3 meters high. If the
viewer moved down the corridor toward the two []{#Page_44
type="pagebreak" title="44"}monitors, he or she would thus be recorded
by the latter camera. Yet the closer one came to the monitor, the
farther one would be from the camera, so that one\'s image on the
monitor would become smaller and smaller. Recorded from behind, viewers
would thus watch themselves walking away from themselves. Surveillance
by others, self-surveillance, recording, and disappearance were directly
and intuitively connected with one another and thematized as fundamental
issues of electronic media.

Toward the end of the 1960s, the easier availability and mobility of
analog electronic production technologies promoted the search for
counter-publics and the exploration of media as comprehensive
lifeworlds. In 1967, Sony introduced its first Portapak system: a
battery-powered, self-contained recording system -- consisting of a
camera, a cord, and a recorder -- with which it was possible to make
(black-and-white) video recordings outside of a studio. Although the
recording apparatus, which required additional devices for editing and
projection, was offered at the relatively expensive price of \$1,500
(which corresponds to about €8,000 today), it was still affordable for
interested groups. Compared with the situation of traditional film
cameras, these new cameras considerably lowered the initial hurdle for
media production, for video tapes were not only much cheaper than film
reels (and could be used for multiple recordings); they also made it
possible to view recorded material immediately and on location. This
enabled the production of works that were far more intuitive and
spontaneous than earlier ones. The 1970s saw the formation of many video
groups, media workshops, and other initiatives for the independent
production of electronic media. Through their own distribution,
festivals, and other channels, such groups created alternative public
spheres. The latter became especially prominent in the United States
where, at the end of the 1960s, the providers of cable networks were
legally obligated to establish public-access channels, on which citizens
were able to operate self-organized and non-commercial television
programs. This gave rise to a considerable public-access movement there,
which at one point extended across 4,000 cities and was responsible for
producing programs from and for these different
communities.[^72[]{#Page_45 type="pagebreak"
title="45"}^](#c1-note-0072){#c1-note-0072a}

What these initiatives, in Western Europe and the United States, had in
common was their attempt to close the gap between the
consumption and production of media, to activate the public, and at
least in part to experiment with the media themselves. Non-professional
producers were empowered with the ability to control who told their
stories and how this happened. Groups that previously had no access to
the mediated public sphere now had opportunities to represent themselves
and their own interests. By working together on their own productions,
such groups demystified the medium of television and simultaneously
equipped it with a critical consciousness.

Especially well received in Germany was the work of Hans Magnus
Enzensberger, who in 1970 argued (on the basis of Bertolt Brecht\'s
radio theory) in favor of distinguishing between "repressive" and
"emancipatory" uses of media. For him, the emancipatory potential of
media lay in the fact that "every receiver is \[...\] a potential
transmitter" that can participate "interactively" in "collective
production."[^73^](#c1-note-0073){#c1-note-0073a} In the same year, the
first German video group, Telewissen, debuted in public with a
demonstration in downtown Darmstadt. In 1980, at the peak of the
movement for independent video production, there were approximately a
hundred such groups throughout (West) Germany. The lack of distribution
channels, however, represented a nearly insuperable obstacle and ensured
that many independent productions were seldom viewed outside of
small-scale settings. Tapes had to be exchanged between groups through
the mail, and they were mainly shown at gatherings and events, and in
bars. The dynamic of alternative media shifted toward a small subculture
(though one networked throughout all of Europe) of pirate radio and
television broadcasters. At the beginning of the 1980s, Radio
Dreyeckland in Freiburg, which had been founded in 1977 as Radio Verte
Fessenheim, began operating as Germany\'s first pirate or citizens\'
radio station, regularly broadcasting information about the political
protest movements that had arisen against the use of nuclear power in
Fessenheim (France), Wyhl (Germany), and Kaiseraugst
(Switzerland). The epicenter of the scene, however, was located in
Amsterdam, where the group known as Rabotnik TV, which was an offshoot
[]{#Page_46 type="pagebreak" title="46"}of the squatter scene there,
would illegally feed its signal through official television stations
after their programming had ended at night (many stations then stopped
broadcasting at midnight). In 1988, the group acquired legal
broadcasting slots on the cable network and reached up to 50,000 viewers
with their weekly experimental shows, which largely consisted of footage
appropriated freely from elsewhere.[^74^](#c1-note-0074){#c1-note-0074a}
Early in 1990, the pirate television station Kanal X was created in
Leipzig; it produced its own citizens\' television programming in the
quasi-lawless milieu of the GDR before
reunification.[^75^](#c1-note-0075){#c1-note-0075a}

These illegal, independent, or public-access stations only managed to
establish themselves as real mass media to a very limited extent.
Nevertheless, they played an important role in sensitizing an entire
generation of media activists, whose opportunities expanded as the means
of production became both better and cheaper. In the name of "tactical
media," a new generation of artistic and political media activists came
together in the middle of the
1990s.[^76^](#c1-note-0076){#c1-note-0076a} They combined the "camcorder
revolution," which in the late 1980s had made video equipment available
to broader swaths of society, stirring visions of democratic media
production, with the newly arrived medium of the internet. Despite still
struggling with numerous technical difficulties, they remained constant
in their belief that the internet would solve the hitherto intractable
problem of distributing content. The transition from analog to digital
media lowered the production hurdle yet again, not least through the
ongoing development of improved software. Now, many stages of production
that had previously required professional or semi-professional expertise
and equipment could also be carried out by engaged laymen. As a
consequence, the focus of interest broadened to include not only the
development of alternative production groups but also the possibility of
a flexible means of rapid intervention in existing structures. Media --
both television and the internet -- were understood as environments in
which one could act without directly representing a reality outside of
the media. Television was analyzed down to its own inner logic, which
could then be manipulated to produce effects beyond the media.
Increasingly, culture jamming and the campaigns of so-called
communication guerrillas were blurring the difference between media and
political activity.[^77[]{#Page_47 type="pagebreak"
title="47"}^](#c1-note-0077){#c1-note-0077a}

This difference was dissolved entirely by a new generation of
politically motivated artists, activists, and hackers, who transferred
the tactics of civil disobedience -- blockading a building with a
sit-in, for instance -- to the
internet.[^78^](#c1-note-0078){#c1-note-0078a} When, in 1994, the
Zapatista Army of National Liberation rose up in the south of Mexico,
several media projects were created to support its mostly peaceful
opposition and to make the movement known in Europe and North America.
As part of this loose network, in 1998 the American artist collective
Electronic Disturbance Theater developed a relatively simple computer
program called FloodNet that enabled networked sympathizers to shut down
websites, such as those of the Mexican government, in a targeted and
temporary manner. The principle was easy enough: the program would
automatically reload a certain website over and over again in order to
exhaust the capacities of its network
servers.[^79^](#c1-note-0079){#c1-note-0079a} The goal was not to
destroy data but rather to disturb the normal functioning of an
institution in order to draw attention to the activities and interests
of the protesters.
:::

::: {.section}
### Networks as places of action {#c1-sec-0012}

What this new generation of media activists had in common with the
hackers and pioneers of computer networks was the idea that
communication media are spaces for agency. During the 1960s, these
programmers were also in search of alternatives; the difference was that
they pursued those alternatives not in counter-publics, but rather in
alternative lifestyles and forms of communication.
The rejection of bureaucracy as a form of social organization played a
significant role in the critique of industrial society formulated by
freedom-oriented social movements. At the beginning of the previous
century, Max Weber had still regarded bureaucracy as a clear sign of
progress toward a rational and methodical
organization.[^80^](#c1-note-0080){#c1-note-0080a} He based this
assessment on processes that were impersonal, rule-bound, and
transparent (in the sense that they were documented with files). But
now, in the 1960s, bureaucracy was being criticized as soulless,
alienated, oppressive, non-transparent, and unfit for an increasingly
complex society. Whereas the first four of these points are in basic
agreement with Weber\'s thesis about "disenchanting" []{#Page_48
type="pagebreak" title="48"}the world, the last point represents a
radical departure from his analysis. Bureaucracies were no longer
regarded as hyper-efficient but rather as inefficient, and their size
and rule-bound nature were no longer seen as strengths but rather as
decisive weaknesses. The social bargain of offering prosperity and
security in exchange for subordination to hierarchical relations struck
many as being anything but attractive, and what blossomed instead was a
broad interest in alternative forms of coexistence. New institutions
were expected to be more flexible and more open. The desire to step away
from the system was widespread, and many (mostly young) people set about
doing exactly that. Alternative ways of life -- communes, shared
apartments, and cooperatives -- were explored in the country and in
cities. They were meant to provide the individual with greater autonomy
and the opportunity to develop his or her own unique potential. Despite
all of the differences between these concepts of life, they nevertheless
shared something of a common denominator: the promise of
reconceptualizing social institutions and the fundamentals of
coexistence, with the aim of reformulating them in such a way as to
allow everyone\'s personal potential to develop fully in the here and
now.

According to critics of such alternatives, bureaucracy was necessary in
order to organize social life, since it radically reduced the world\'s
complexity by forcing it through the bottleneck of official procedures.
However, the price paid for such efficiency involved the atrophying of
human relationships, which had to be subordinated to rigid processes
that were incapable of registering unique characteristics and
differences and were unable to react in a timely manner to changing
circumstances.

In the 1960s, many countercultural attempts to find new forms of
organization placed personal and open communication at the center of
their efforts. Each individual was understood as a singular person with
untapped potential rather than a carrier of abstract and clearly defined
functions. It was soon realized, however, that every common activity and
every common decision entailed processes that were time-intensive and
communication-intensive. As soon as a group exceeded a certain size, it
became practically impossible for it to reach any consensus. As a result
of these experiences, an entire worldview emerged that propagated
"smallness" as a central []{#Page_49 type="pagebreak" title="49"}value
("small is beautiful"). It was thought that in this way society might
escape from bureaucracy with its ostensibly disastrous consequences for
humanity and the environment.[^81^](#c1-note-0081){#c1-note-0081a} But
this belief did not last for long. For, unlike the majority of European
alternative movements, the counterculture in the United States was not
overwhelmingly critical of technology. On the contrary, many actors
there sought suitable technologies for solving the practical problems of
social organization. At the end of the 1960s, a considerable amount of
attention was devoted to the field of basic technological research. This
field brought together the interests of the military, academics,
businesses, and activists from the counterculture. The common ground for
all of them was a cybernetic vision of institutions, or, in the words of
the historian Fred Turner:

::: {.extract}
a picture of humans and machines as dynamic, collaborating elements in a
single, highly fluid, socio-technical system. Within that system,
control emerged not from the mind of a commanding officer, but from the
complex, probabilistic interactions of humans, machines and events
around them. Moreover, the mechanical elements of the system in question
-- in this case, the predictor -- enabled the human elements to achieve
what all Americans would agree was a worthwhile goal. \[...\] Over the
coming decades, this second vision of benevolent man-machine systems, of
circular flows of information, would emerge as a driving force in the
establishment of the military--industrial--academic complex and as a
model of an alternative to that
complex.[^82^](#c1-note-0082){#c1-note-0082a}
:::

This double role was possible because, as a theory, cybernetics was
formulated in extraordinarily abstract terms, so much so that a whole
variety of competing visions could be associated with
it.[^83^](#c1-note-0083){#c1-note-0083a} With cybernetics as a
meta-science, it was possible to investigate the common features of
technical, social, and biological
processes.[^84^](#c1-note-0084){#c1-note-0084a} They were analyzed as
open, interactive, and information-processing systems. It was especially
consequential that cybernetics defined control and communication as the
same thing, namely as activities oriented toward informational
feedback.[^85^](#c1-note-0085){#c1-note-0085a} The heterogeneous legacy
of cybernetics and its synonymous treatment of the terms "communication"
and "control" continue to influence information technology and the
internet today.[]{#Page_50 type="pagebreak" title="50"}

The various actors who contributed to the development of the internet
shared a common interest in forms of organization based on the
comprehensive, dynamic, and open exchange of information. On both the
micro and the macro level (and this is the decisive point),
decentralized and flexible communication technologies were meant to
become the foundation of new organizational models. Militaries feared
attacks on their command and communication centers; academics wanted to
broaden their culture of autonomy, collaboration among peers, and the
free exchange of information; businesses were looking for new areas of
activity; and countercultural activists were longing for new forms of
peaceful coexistence.[^86^](#c1-note-0086){#c1-note-0086a} They all
rejected the bureaucratic model, and the counterculture provided them
with the central catchword for their alternative vision: community.
Though rather difficult to define, it was a powerful and positive term
that somehow promised the opposite of bureaucracy: humanity,
cooperation, horizontality, mutual trust, and consensus. Now, however,
humanity was expected to be reconfigured as a community in cooperation
with and inseparable from machines. What was yearned for was a
liberating symbiosis of man and machine, an idea that the author
Richard Brautigan was quick to mock in his poem "All Watched Over by
Machines of Loving Grace" from 1967:

::: {.poem}
::: {.lineGroup}
I like to think (and

the sooner the better!)

of a cybernetic meadow

where mammals and computers

live together in mutually

programming harmony

like pure water

touching clear sky.[^87^](#c1-note-0087){#c1-note-0087a}
:::
:::

Here, Brautigan is ridiculing both the impatience (*the sooner the
better!*) and the naïve optimism (*harmony, clear sky*) of the
countercultural activists. Above all, he regarded the underlying vision
as an innocent, amusing fantasy rather than as a potential threat against
which something had to be done. And there were also reasons to believe
that, ultimately, the new communities would be free from the coercive
nature that []{#Page_51 type="pagebreak" title="51"}had traditionally
characterized the downside of community experiences. It was thought that
the autonomy and freedom of the individual could be regained in and by
means of the community. The conditions for this were that participation
in the community had to be voluntary and that the rules of participation
had to be self-imposed. I will return to this topic in greater detail
below.

In line with their solution-oriented engineering culture and the
results-focused military funders who by and large set the agenda, a
relatively small group of computer scientists now took it upon
themselves to establish the technological foundations for new
institutions. This was not an abstract goal for the distant future;
rather, they wanted to change everyday practices as soon as possible. It
was around this time that advanced technology became the basis of social
communication, which now adopted forms that would have been
inconceivable (not to mention impracticable) without these
preconditions. Of course, effective communication technologies already
existed at the time. Large corporations had begun long before then to
operate their own computing centers. In contrast to the latter, however,
the new infrastructure could also be used by individuals outside of
established institutions and could be implemented for all forms of
communication and exchange. This idea gave rise to a pragmatic culture
of horizontal, voluntary cooperation. The clearest summary of this early
ethos -- which originated at the unusual intersection of military,
academic, and countercultural interests -- was offered by David D.
Clark, a computer scientist who for some time coordinated the
development of technical standards for the internet: "We reject: kings,
presidents and voting. We believe in: rough consensus and running
code."[^88^](#c1-note-0088){#c1-note-0088a}

All forms of classical, formal hierarchies and their methods for
resolving conflicts -- commands (by kings and presidents) and votes --
were dismissed. Implemented in their place was a pragmatics of open
cooperation that was oriented around two guiding principles. The first
was that different views should be discussed without a single individual
being able to block any final decisions. Such was the meaning of the
expression "rough consensus." The second was that, in accordance with
the classical engineering tradition, the focus should remain on concrete
solutions that had to be measured against one []{#Page_52
type="pagebreak" title="52"}another on the basis of transparent
criteria. Such was the meaning of the expression "running code." In
large part, this method was possible because the group oriented around
these principles was, internally, relatively homogeneous: it consisted
of top-notch computer scientists -- all of them men -- at respected
American universities and research centers. For this very reason, many
potential and fundamental conflicts were avoided, at least at first.
This internal homogeneity lends rather dark undertones to their sunny
vision, but this was hardly recognized at the time. Today these
undertones are far more apparent, and I will return to them below.

Not only were technical protocols developed on the basis of these
principles, but organizational forms as well. Along with the Internet
Engineering Task Force (which he directed), Clark created the so-called
Request-for-Comments documents, with which ideas could be presented to
interested members of the community and simultaneous feedback could be
collected in order to work through the ideas in question and thus reach
a rough consensus. If such a consensus could not be reached -- if, for
instance, an idea failed to resonate with anyone or was too
controversial -- then the matter would be dropped. The feedback was
organized as a form of many-to-many communication through email lists,
newsgroups, and online chat systems. This proved to be so effective that
horizontal communication within large groups or between multiple groups
could take place without resulting in chaos. This invalidated the
traditional assumption that social units, once they exceed a certain
size, must necessarily introduce hierarchical structures in order to
reduce complexity and the need for communication. In other words, the foundations
were laid for larger numbers of (changing) people to organize flexibly
and with the aim of building an open consensus. For Manuel Castells,
this combination of organizational flexibility and scalability in size
is the decisive innovation that enabled the rise of the network
society.[^89^](#c1-note-0089){#c1-note-0089a} At the same time, however,
this meant that forms of organization spread that could only be possible
on the basis of technologies that have formed (and continue to form)
part of the infrastructure of the internet. Digital technology and the
social activity of individual users were linked together to an
unprecedented extent. Social and cultural agendas were now directly
related []{#Page_53 type="pagebreak" title="53"}to and entangled with
technical design. Each of the four original interest groups -- the
military, scientists, businesses, and the counterculture -- implemented
new technologies to pursue their own projects, which partly complemented
and partly contradicted one another. As we know today, the first three
groups still cooperate closely with each other. To a great extent, this
has allowed the military and corporations, which are willingly supported
by researchers in need of funding, to determine the technology and thus
aspects of the social and cultural agendas that depend on it.

The software developers\' immediate environment experienced its first
major change in the late 1970s. Software, which for many had been a mere
supplement to more expensive and highly specialized hardware, became a
marketable good with stringent licensing restrictions. A new generation
of businesses, led by Bill Gates, suddenly began to label cooperation
among programmers as theft.[^90^](#c1-note-0090){#c1-note-0090a}
Previously it had been par for the course, and above all necessary, for
programmers to share software with one another. The former culture of
horizontal cooperation between developers transformed into a
hierarchical and commercially oriented relation between developers and
users (many of whom, at least at the beginning, had developed programs
of their own). For the first time, copyright came to play an important
role in digital culture. In order to survive in this environment, the
practice of open cooperation had to be placed on a new legal foundation.
Copyright law, which served to separate programmers (producers) from
users (consumers), had to be neutralized or circumvented. The first step
in this direction was taken in 1984 by the activist and programmer
Richard Stallman. Composed by Stallman, the GNU General Public License
was and remains a brilliant hack that uses the letter of copyright law
against its own spirit. This happens in the form of a license that
defines "four freedoms":

1. The freedom to run the program as you wish, for any purpose (freedom
0).
2. The freedom to study how the program works and change it so it does
your computing as you wish (freedom 1).
3. The freedom to redistribute copies so you can help your neighbor
(freedom 2).[]{#Page_54 type="pagebreak" title="54"}
4. The freedom to distribute copies of your modified versions to others
(freedom 3). By doing this you can give the whole community a chance
to benefit from your changes.[^91^](#c1-note-0091){#c1-note-0091a}

Thanks to this license, people who were personally unacquainted and did
not share a common social environment could now cooperate (freedoms 2
and 3) and simultaneously remain autonomous and unrestricted (freedoms 0
and 1). For many, the tension between the need to develop complex
software in large teams and the desire to maintain one\'s own autonomy
represented an incentive to try out new forms of
cooperation.[^92^](#c1-note-0092){#c1-note-0092a}

Stallman\'s influence was at first limited to a small circle of
programmers. In the middle of the 1980s, the goal of developing a
completely free operating system seemed a distant one. Communication
between those interested in doing so was often slow and complicated. In
part, program code still had to be sent by mail. It was not until the
beginning of the 1990s that students in technical departments at many
universities could access the
internet.[^93^](#c1-note-0093){#c1-note-0093a} One of the first to use
these new opportunities in an innovative way was a Finnish student named
Linus Torvalds. He built upon Stallman\'s work and programmed a kernel,
which, as the most important module of an operating system, governs the
interaction between hardware and software. He published the first free
version of this in 1991 and encouraged anyone interested to give him
feedback.[^94^](#c1-note-0094){#c1-note-0094a} And it poured in.
Torvalds reacted promptly and issued new versions of his software in
quick succession. Instead of understanding his software as a finished
product, he treated it like an open-ended process. This, in turn,
motivated even more developers to participate, because they saw that
their contributions were being adopted swiftly, which led to the
formation of an open community of interested programmers who swapped
ideas over the internet and continued writing software. In order to
maintain an overview of the different versions of the program, which
appeared in parallel with one another, it soon became necessary to
employ specialized platforms. The fusion of social processes --
horizontal and voluntary cooperation among developers -- and
technological platforms, which enabled this form of cooperation
[]{#Page_55 type="pagebreak" title="55"}by providing archives, filter
functions, and search capabilities that made it possible to organize
large amounts of data, was thus advanced even further. The programmers
were no longer primarily working on the development of the internet
itself, which by then was functioning quite reliably, but were rather
using the internet to apply their cooperative principles to other
arenas. By the end of the 1990s, the free-software movement had
established a new, internet-based form of organization and had
demonstrated its efficiency in practice: horizontal, informal
communities of actors -- voluntary, autonomous, and focused on a common
interest -- that, on the basis of high-tech infrastructure, could
include thousands of people without having to create formal hierarchies.
:::
:::

::: {.section}
From the Margins to the Center of Society {#c1-sec-0013}
-----------------------------------------

It was around this same time that the technologies in question, which
were already no longer very new, entered mainstream society. Within a
few years, the internet became part of everyday life. Three years before
the turn of the millennium, only about 6 percent of the entire German
population used the internet, often only occasionally. Three years after
the millennium, the number of users already exceeded 53 percent. Since
then, this share has increased even further. In 2014, it was more than
97 percent for people under the age of
40.[^95^](#c1-note-0095){#c1-note-0095a} Parallel to these developments,
data transfer rates increased considerably, broadband connections
supplanted dial-up modems, and the internet was suddenly \"here\" and no
longer "there." With the spread of mobile devices, especially since the
year 2007 when the first iPhone was introduced, digital communication
became available both extensively and continuously. Since then, the
internet has been ubiquitous. The amount of time that users spend online
has increased and, with the rapid ascent of social mass media such as
Facebook, people have been online in almost every situation and
circumstance in life.[^96^](#c1-note-0096){#c1-note-0096a} The internet,
like water or electricity, has become for many people a utility that is
simply taken for granted.

In a BBC survey from 2010, 80 percent of those polled believed that
internet access -- a precondition for participating []{#Page_56
type="pagebreak" title="56"}in the now dominant digital condition --
should be regarded as a fundamental human right. This idea was most
popular in South Korea (96 percent) and Mexico (94 percent), while in
Germany at least 72 percent were of the same
opinion.[^97^](#c1-note-0097){#c1-note-0097a}

On the basis of this new infrastructure, which is now relevant in all
areas of life, the cultural developments described above have been
severed from the specific historical conditions from which they emerged
and have permeated society as a whole. Expressivity -- the ability to
communicate something "unique" -- is no longer a trait of artists and
knowledge workers alone, but rather something that is required by an
increasingly broader stratum of society and is already being taught in
schools. Users of social mass media must produce (themselves). The
development of specific, differentiated identities and the demand that
each be treated equally are no longer promoted exclusively by groups who
have to struggle against repression, existential threats, and
marginalization, but have penetrated deeply into the former mainstream,
not least because the present forms of capitalism have learned to profit
from the spread of niches and segmentation. When even conservative
parties have abandoned the idea of a "leading culture," then cultural
differences can no longer be classified by enforcing an absolute and
indisputable hierarchy, the top of which is occupied by specific
(geographical and cultural) centers. Rather, a space has been opened up
for endless negotiations, a space in which -- at least in principle --
everything can be called into question. This is not, of course, a
peaceful and egalitarian process. In addition to the practical hurdles
that exist in polarizing societies, there are also violent backlashes
and new forms of fundamentalism that are attempting once again to remove
certain religious, social, cultural, or political dimensions of
existence from the discussion. Yet these can only be understood in light
of a sweeping cultural transformation that has already reached
mainstream society.[^98^](#c1-note-0098){#c1-note-0098a} In other words,
the digital condition has become quotidian and dominant. It forms a
cultural constellation that determines all areas of life, and its
characteristic features are clearly recognizable. These will be the
focus of the next chapter.[]{#Page_57 type="pagebreak" title="57"}
:::

::: {.section .notesSet type="rearnotes"}
[]{#notesSet}Notes {#c1-ntgp-9999}
------------------

::: {.section .notesList}
[1](#c1-note-0001a){#c1-note-0001}  Kathrin Passig and Sascha Lobo,
*Internet: Segen oder Fluch* (Berlin: Rowohlt, 2012) \[--trans.\].

[2](#c1-note-0002a){#c1-note-0002}  The expression "heteronormatively
behaving" is used here to mean that, while in the public eye, the
behavior of the people []{#Page_177 type="pagebreak" title="177"}in
question conformed to heterosexual norms regardless of their personal
sexual orientations.

[3](#c1-note-0003a){#c1-note-0003}  No order is ever entirely closed
off. In this case, too, there was also room for exceptions and for
collective moments of greater cultural multiplicity. That said, the
social openness of the end of the 1920s, for instance, was restricted to
particular milieus within large cities and was accordingly short-lived.

[4](#c1-note-0004a){#c1-note-0004}  Fritz Machlup, *The Political
Economy of Monopoly: Business, Labor and Government Policies*
(Baltimore, MD: The Johns Hopkins University Press, 1952).

[5](#c1-note-0005a){#c1-note-0005}  Machlup was a student of Ludwig von
Mises, the most influential representative of this radically
individualist school. See Hans-Hermann Hoppe, "Die Österreichische
Schule und ihre Bedeutung für die moderne Wirtschaftswissenschaft," in
Karl-Dieter Grüske (ed.), *Die Gemeinwirtschaft: Kommentarband zur
Neuauflage von Ludwig von Mises' "Die Gemeinwirtschaft"* (Düsseldorf:
Verlag Wirtschaft und Finanzen, 1996), pp. 65--90.

[6](#c1-note-0006a){#c1-note-0006}  Fritz Machlup, *The Production and
Distribution of Knowledge in the United States* (New York: John Wiley &
Sons, 1962).

[7](#c1-note-0007a){#c1-note-0007}  The term "knowledge worker" had
already been introduced to the discussion a few years before; see Peter
Drucker, *Landmarks of Tomorrow: A Report on the New \'Post-Modern\' World* (New York: Harper,
1959).

[8](#c1-note-0008a){#c1-note-0008}  Peter Ecker, "Die
Verwissenschaftlichung der Industrie: Zur Geschichte der
Industrieforschung in den europäischen und amerikanischen
Elektrokonzernen 1890--1930," *Zeitschrift für Unternehmensgeschichte*
35 (1990): 73--94.

[9](#c1-note-0009a){#c1-note-0009}  Edward Bernays was the son of
Sigmund Freud\'s sister Anna and Ely Bernays, the brother of Freud\'s
wife, Martha Bernays.

[10](#c1-note-0010a){#c1-note-0010}  Edward L. Bernays, *Propaganda*
(New York: Horace Liverlight, 1928).

[11](#c1-note-0011a){#c1-note-0011}  James Beniger, *The Control
Revolution: Technological and Economic Origins of the Information
Society* (Cambridge, MA: Harvard University Press, 1986), p. 350.

[12](#c1-note-0012a){#c1-note-0012}  Norbert Wiener, *Cybernetics: Or
Control and Communication in the Animal and the Machine* (New York: J.
Wiley, 1948).

[13](#c1-note-0013a){#c1-note-0013}  Daniel Bell, *The Coming of
Post-Industrial Society: A Venture in Social Forecasting* (New York:
Basic Books, 1973).

[14](#c1-note-0014a){#c1-note-0014}  Simon Nora and Alain Minc, *The
Computerization of Society: A Report to the President of France*
(Cambridge, MA: MIT Press, 1980).

[15](#c1-note-0015a){#c1-note-0015}  Manuel Castells, *The Rise of the
Network Society* (Oxford: Blackwell, 1996).

[16](#c1-note-0016a){#c1-note-0016}  Hans-Dieter Kübler, *Mythos
Wissensgesellschaft: Gesellschaftlicher Wandel zwischen Information,
Medien und Wissen -- Eine Einführung* (Wiesbaden: Verlag für
Sozialwissenschaften, 2009).[]{#Page_178 type="pagebreak" title="178"}

[17](#c1-note-0017a){#c1-note-0017}  Luc Boltanski and Ève Chiapello,
*The New Spirit of Capitalism*, trans. Gregory Elliott (London: Verso,
2005).

[18](#c1-note-0018a){#c1-note-0018}  Michael Piore and Charles Sabel,
*The Second Industrial Divide: Possibilities of Prosperity* (New York:
Basic Books, 1984).

[19](#c1-note-0019a){#c1-note-0019}  Castells, *The Rise of the Network
Society*. For a critical evaluation of Castells\'s work, see Felix
Stalder, *Manuel Castells and the Theory of the Network Society*
(Cambridge: Polity, 2006).

[20](#c1-note-0020a){#c1-note-0020}  "UK Creative Industries Mapping
Documents" (1998); quoted from Terry Flew, *The Creative Industries:
Culture and Policy* (Los Angeles, CA: Sage, 2012), pp. 9--10.

[21](#c1-note-0021a){#c1-note-0021}  The rise of the creative
industries, and the hope that they inspired among politicians, did not
escape criticism. Among the first works to draw attention to the
precarious nature of working in such industries was Angela McRobbie\'s
*British Fashion Design: Rag Trade or Image Industry?* (New York:
Routledge, 1998).

[22](#c1-note-0022a){#c1-note-0022}  This definition is not without a
degree of tautology, given that economic growth is based on talent,
which itself is defined by its ability to create new jobs; that is,
economic growth. At the same time, he employs the term "talent" in an
extremely narrow sense. Apparently, if something has nothing to do with
job creation, it also has nothing to do with talent or creativity. All
forms of creativity are thus measured and compared according to a common
criterion.

[23](#c1-note-0023a){#c1-note-0023}  Richard Florida, *Cities and the
Creative Class* (New York: Routledge, 2005), p. 5.

[24](#c1-note-0024a){#c1-note-0024}  One study has reached the
conclusion that, despite mass participation, "a new form of
communicative elite has developed, namely digitally and technically
versed actors who inform themselves in this way, exchange ideas and thus
gain influence. For them, the possibilities of platforms mainly
represent an expansion of useful tools. Above all, the dissemination of
digital technology makes it easier for versed and highly networked
individuals to convey their news more simply -- and, for these groups of
people, it lowers the threshold for active participation." Michael
Bauer, "Digitale Technologien und Partizipation," in Clara Landler et
al. (eds), *Netzpolitik in Österreich: Internet, Macht, Menschenrechte*
(Krems: Donau-Universität Krems, 2013), pp. 219--24, at 224
\[--trans.\].

[25](#c1-note-0025a){#c1-note-0025}  Boltanski and Chiapello, *The New
Spirit of Capitalism*.

[26](#c1-note-0026a){#c1-note-0026}  According to Wikipedia,
"Heteronormativity is the belief that people fall into distinct and
complementary genders (man and woman) with natural roles in life. It
assumes that heterosexuality is the only sexual orientation or only
norm, and states that sexual and marital relations are most (or only)
fitting between people of opposite sexes."[]{#Page_179 type="pagebreak"
title="179"}

[27](#c1-note-0027a){#c1-note-0027}  Jannis Plastargias, *RotZSchwul:
Der Beginn einer Bewegung (1971--1975)* (Berlin: Querverlag, 2015).

[28](#c1-note-0028a){#c1-note-0028}  Helmut Ahrens et al. (eds),
*Tuntenstreit: Theoriediskussion der Homosexuellen Aktion Westberlin*
(Berlin: Rosa Winkel, 1975), p. 4.

[29](#c1-note-0029a){#c1-note-0029}  Susanne Regener and Katrin Köppert
(eds), *Privat/öffentlich: Mediale Selbstentwürfe von Homosexualität*
(Vienna: Turia + Kant, 2013).

[30](#c1-note-0030a){#c1-note-0030}  Such, for instance, was the
assessment of Manfred Bruns, the spokesperson for the Lesbian and Gay
Association in Germany, in his text "Schwulenpolitik früher" (link no
longer active). From today\'s perspective, however, the main problem
with this event was the unclear position of the Green Party with respect
to pedophilia. See Franz Walter et al. (eds), *Die Grünen und die
Pädosexualität: Eine bundesdeutsche Geschichte* (Göttingen: Vandenhoeck
& Ruprecht, 2014).

[31](#c1-note-0031a){#c1-note-0031}  "AIDS: Tödliche Seuche," *Der
Spiegel* 23 (1983) \[--trans.\].

[32](#c1-note-0032a){#c1-note-0032}  Quoted from Frank Niggemeier, "Gay
Pride: Schwules Selbstbewußtsein aus dem Village,\" in Bernd Polster
(ed.), *West-Wind: Die Amerikanisierung Europas* (Cologne: Dumont,
1995), pp. 179--87, at 184 \[--trans.\].

[33](#c1-note-0033a){#c1-note-0033}  Quoted from Regener and Köppert,
*Privat/öffentlich*, p. 7 \[--trans.\].

[34](#c1-note-0034a){#c1-note-0034}  Hans-Peter Buba and László A.
Vaskovics, *Benachteiligung gleichgeschlechtlich orientierter Personen
und Paare: Studie im Auftrag des Bundesministeriums der Justiz* (Cologne:
Bundesanzeiger, 2001).

[35](#c1-note-0035a){#c1-note-0035}  This process of internal
differentiation has not yet reached its conclusion, and thus the
acronyms have become longer and longer: LGBPTTQQIIAA+ stands for
lesbian, gay, bisexual, pansexual, transgender, transsexual, queer,
questioning, intersex, intergender, asexual, ally.

[36](#c1-note-0036a){#c1-note-0036}  Judith Butler, *Gender Trouble:
Feminism and the Subversion of Identity* (New York: Routledge, 1989).

[37](#c1-note-0037a){#c1-note-0037}  Andreas Krass, "Queer Studies: Eine
Einführung," in Krass (ed.), *Queer denken: Gegen die Ordnung der
Sexualität* (Frankfurt am Main: Suhrkamp, 2003), pp. 7--27.

[38](#c1-note-0038a){#c1-note-0038}  Edward W. Said, *Orientalism* (New
York: Vintage Books, 1978).

[39](#c1-note-0039a){#c1-note-0039}  Karl August Wittfogel, *Oriental
Despotism: A Comparative Study of Total Power* (New Haven, CT: Yale
University Press, 1957).

[40](#c1-note-0040a){#c1-note-0040}  Silke Förschler, *Bilder des Harem:
Medienwandel und kultureller Austausch* (Berlin: Reimer, 2010).

[41](#c1-note-0041a){#c1-note-0041}  The selection and effectiveness of
these images is not a coincidence. Camel was one of the first brands of
cigarettes for []{#Page_180 type="pagebreak" title="180"}which
advertising, in the sense described above, was used in a systematic
manner.

[42](#c1-note-0042a){#c1-note-0042}  This would not exclude feelings of
regret about the loss of an exotic and romantic way of life, such as
those of T. E. Lawrence, whose activities in the Near East during the
First World War were memorialized in the film *Lawrence of Arabia*
(1962).

[43](#c1-note-0043a){#c1-note-0043}  Said has often been criticized,
however, for portraying orientalism so dominantly that there seems to be
no way out of the existing dependent relations. For an overview of the
debates that Said has instigated, see María do Mar Castro Varela and
Nikita Dhawan, *Postkoloniale Theorie: Eine kritische Einführung*
(Bielefeld: Transcript, 2005), pp. 37--46.

[44](#c1-note-0044a){#c1-note-0044}  "Migration führt zu 'hybrider'
Gesellschaft" (an interview with Homi K. Bhabha), *ORF Science*
(November 9, 2007), online \[--trans.\].

[45](#c1-note-0045a){#c1-note-0045}  Homi K. Bhabha, *The Location of
Culture* (New York: Routledge, 1994), p. 4.

[46](#c1-note-0046a){#c1-note-0046}  Elisabeth Bronfen and Benjamin
Marius, "Hybride Kulturen: Einleitung zur anglo-amerikanischen
Multikulturalismusdebatte," in Bronfen et al. (eds), *Hybride Kulturen*
(Tübingen: Stauffenburg), pp. 1--30, at 8 \[--trans.\].

[47](#c1-note-0047a){#c1-note-0047}  "What Is Postcolonial Thinking? An
Interview with Achille Mbembe," *Eurozine* (December 2006), online.

[48](#c1-note-0048a){#c1-note-0048}  Migrants have always created their
own culture, which deals in various ways with the experience of
migration itself, but non-migrant populations have long tended to ignore
this. Things have now begun to change in this regard, for instance
through Imran Ayata and Bülent Kullukcu\'s compilation of songs by the
Turkish diaspora of the 1970s and 1980s: *Songs of Gastarbeiter*
(Munich: Trikont, 2013).

[49](#c1-note-0049a){#c1-note-0049}  The conference programs can be
found at: \<\>.

[50](#c1-note-0050a){#c1-note-0050}  "Deutschland entwickelt sich zu
einem attraktiven Einwanderungsland für hochqualifizierte Zuwanderer,"
press release by the CDU/CSU Alliance in the German Parliament (June 4,
2014), online \[--trans.\].

[51](#c1-note-0051a){#c1-note-0051}  Andreas Reckwitz, *Die Erfindung
der Kreativität: Zum Prozess gesellschaftlicher Ästhetisierung* (Berlin:
Suhrkamp, 2011), p. 180 \[--trans.\]. An English translation of this
book is forthcoming: *The Invention of Creativity: Modern Society and
the Culture of the New*, trans. Steven Black (Cambridge: Polity, 2017).

[52](#c1-note-0052a){#c1-note-0052}  Gert Selle, *Geschichte des Design
in Deutschland* (Frankfurt am Main: Campus, 2007).

[53](#c1-note-0053a){#c1-note-0053}  "Less Is More: The Design Ethos of
Dieter Rams," *SFMOMA* (June 29, 2011), online.[]{#Page_181
type="pagebreak" title="181"}

[54](#c1-note-0054a){#c1-note-0054}  The cybernetic perspective was
introduced to the field of design primarily by Buckminster Fuller. See
Diedrich Diederichsen and Anselm Franke, *The Whole Earth: California
and the Disappearance of the Outside* (Berlin: Sternberg, 2013).

[55](#c1-note-0055a){#c1-note-0055}  Clive Dilnot, "Design as a Socially
Significant Activity: An Introduction," *Design Studies* 3/3 (1982):
139--46.

[56](#c1-note-0056a){#c1-note-0056}  Victor J. Papanek, *Design for the
Real World: Human Ecology and Social Change* (New York: Pantheon, 1972),
p. 2.

[57](#c1-note-0057a){#c1-note-0057}  Reckwitz, *Die Erfindung der
Kreativität*.

[58](#c1-note-0058a){#c1-note-0058}  B. Joseph Pine and James H.
Gilmore, *The Experience Economy: Work Is Theater and Every Business Is
a Stage* (Boston, MA: Harvard Business School Press, 1999), p. ix (the
emphasis is original).

[59](#c1-note-0059a){#c1-note-0059}  Mona El Khafif, *Inszenierter
Urbanismus: Stadtraum für Kunst, Kultur und Konsum im Zeitalter der
Erlebnisgesellschaft* (Saarbrücken: VDM Verlag Dr. Müller, 2013).

[60](#c1-note-0060a){#c1-note-0060}  Konrad Becker and Martin Wassermair
(eds), *Phantom Kulturstadt* (Vienna: Löcker, 2009).

[61](#c1-note-0061a){#c1-note-0061}  See, for example, Andres Bosshard,
*Stadt hören: Klangspaziergänge durch Zürich* (Zurich: NZZ Libro,
2009).

[62](#c1-note-0062a){#c1-note-0062}  "An alternate reality game (ARG),"
according to Wikipedia, "is an interactive networked narrative that uses
the real world as a platform and employs transmedia storytelling to
deliver a story that may be altered by players\' ideas or actions."

[63](#c1-note-0063a){#c1-note-0063}  Eric von Hippel, *Democratizing
Innovation* (Cambridge, MA: MIT Press, 2005).

[64](#c1-note-0064a){#c1-note-0064}  It is often the case that the
involvement of users simply serves to increase the efficiency of
production processes and customer service. Many activities that were
once undertaken at the expense of businesses now have to be carried out
by the customers themselves. See Günter Voss, *Der arbeitende Kunde:
Wenn Konsumenten zu unbezahlten Mitarbeitern werden* (Frankfurt am Main:
Campus, 2005).

[65](#c1-note-0065a){#c1-note-0065}  Beniger, *The Control Revolution*,
pp. 411--16.

[66](#c1-note-0066a){#c1-note-0066}  Louis Althusser, "Ideology and
Ideological State Apparatuses (Notes towards an Investigation)," in
Althusser, *Lenin and Philosophy and Other Essays*, trans. Ben Brewster
(New York: Monthly Review Press, 1971), pp. 127--86.

[67](#c1-note-0067a){#c1-note-0067}  Florian Becker et al. (eds),
*Gramsci lesen! Einstiege in die Gefängnishefte* (Hamburg: Argument,
2013), pp. 20--35.

[68](#c1-note-0068a){#c1-note-0068}  Guy Debord, *The Society of the
Spectacle*, trans. Fredy Perlman and Jon Supak (Detroit: Black & Red,
1977).

[69](#c1-note-0069a){#c1-note-0069}  Derrick de Kerckhove, "McLuhan and
the Toronto School of Communication," *Canadian Journal of
Communication* 14/4 (1989): 73--9.[]{#Page_182 type="pagebreak"
title="182"}

[70](#c1-note-0070a){#c1-note-0070}  Marshall McLuhan, *Understanding
Media: The Extensions of Man* (New York: McGraw-Hill, 1964).

[71](#c1-note-0071a){#c1-note-0071}  Nam June Paik, "Exposition of Music
-- Electronic Television" (leaflet accompanying the exhibition). Quoted
from Zhang Ga, "Sounds, Images, Perception and Electrons," *Douban*
(March 3, 2016), online.

[72](#c1-note-0072a){#c1-note-0072}  Laura R. Linder, *Public Access
Television: America\'s Electronic Soapbox* (Westport, CT: Praeger,
1999).

[73](#c1-note-0073a){#c1-note-0073}  Hans Magnus Enzensberger,
"Constituents of a Theory of the Media," in Noah Wardrip-Fruin and Nick
Montfort (eds), *The New Media Reader* (Cambridge, MA: MIT Press, 2003),
pp. 259--75.

[74](#c1-note-0074a){#c1-note-0074}  Paul Groot, "Rabotnik TV,"
*Mediamatic* 2/3 (1988), online.

[75](#c1-note-0075a){#c1-note-0075}  Inke Arns, "Social Technologies:
Deconstruction, Subversion and the Utopia of Democratic Communication,"
*Medien Kunst Netz* (2004), online.

[76](#c1-note-0076a){#c1-note-0076}  The term was coined at a series of
conferences titled The Next Five Minutes (N5M), which were held in
Amsterdam from 1993 to 2003. See \<\>.

[77](#c1-note-0077a){#c1-note-0077}  Mark Dery, *Culture Jamming:
Hacking, Slashing and Sniping in the Empire of Signs* (Westfield: Open
Media, 1993); Luther Blissett et al., *Handbuch der
Kommunikationsguerilla*, 5th edn (Berlin: Assoziation A, 2012).

[78](#c1-note-0078a){#c1-note-0078}  Critical Art Ensemble, *Electronic
Civil Disobedience and Other Unpopular Ideas* (New York: Autonomedia,
1996).

[79](#c1-note-0079a){#c1-note-0079}  Today this method is known as a
"distributed denial of service attack" (DDOS).

[80](#c1-note-0080a){#c1-note-0080}  Max Weber, *Economy and Society: An
Outline of Interpretive Sociology*, trans. Guenther Roth and Claus
Wittich (Berkeley, CA: University of California Press, 1978), pp. 26--8.

[81](#c1-note-0081a){#c1-note-0081}  Ernst Friedrich Schumacher, *Small
Is Beautiful: Economics as if People Mattered*, 8th edn (New York:
Harper Perennial, 2014).

[82](#c1-note-0082a){#c1-note-0082}  Fred Turner, *From Counterculture
to Cyberculture: Stewart Brand, the Whole Earth Movement and the Rise of
Digital Utopianism* (Chicago, IL: University of Chicago Press, 2006), p.
21. In this regard, see also the documentary films *Das Netz* by Lutz
Dammbeck (2003) and *All Watched Over by Machines of Loving Grace* by
Adam Curtis (2011).

[83](#c1-note-0083a){#c1-note-0083}  It was possible to understand
cybernetics as a language of free markets or also as one of centralized
planned economies. See Slava Gerovitch, *From Newspeak to Cyberspeak: A
History of Soviet Cybernetics* (Cambridge, MA: MIT Press, 2002). The
great interest of Soviet scientists in cybernetics rendered the term
rather suspicious in the West, where it was disassociated from
artificial intelligence.[]{#Page_183 type="pagebreak" title="183"}

[84](#c1-note-0084a){#c1-note-0084}  Claus Pias, "The Age of
Cybernetics," in Pias (ed.), *Cybernetics: The Macy Conferences
1946--1953* (Zurich: Diaphanes, 2016), pp. 11--27.

[85](#c1-note-0085a){#c1-note-0085}  Norbert Wiener, one of the
cofounders of cybernetics, explained this as follows in 1950: "In giving
the definition of Cybernetics in the original book, I classed
communication and control together. Why did I do this? When I
communicate with another person, I impart a message to him, and when he
communicates back with me he returns a related message which contains
information primarily accessible to him and not to me. When I control
the actions of another person, I communicate a message to him, and
although this message is in the imperative mood, the technique of
communication does not differ from that of a message of fact.
Furthermore, if my control is to be effective I must take cognizance of
any messages from him which may indicate that the order is understood
and has been obeyed." Norbert Wiener, *The Human Use of Human Beings:
Cybernetics and Society*, 2nd edn (London: Free Association Books,
1989), p. 16.

[86](#c1-note-0086a){#c1-note-0086}  Though presented here as distinct,
these interests could in fact be held by one and the same person. In
*From Counterculture to Cyberculture*, for instance, Turner discusses
countercultural entrepreneurs.

[87](#c1-note-0087a){#c1-note-0087}  Richard Brautigan, "All Watched
Over by Machines of Loving Grace," in *All Watched Over by Machines of
Loving Grace*, by Brautigan (San Francisco: The Communication Company,
1967).

[88](#c1-note-0088a){#c1-note-0088}  David D. Clark, "A Cloudy Crystal
Ball: Visions of the Future," *Internet Engineering Task Force* (July
1992), online.

[89](#c1-note-0089a){#c1-note-0089}  Castells, *The Rise of the Network
Society*.

[90](#c1-note-0090a){#c1-note-0090}  Bill Gates, "An Open Letter to
Hobbyists," *Homebrew Computer Club Newsletter* 2/1 (1976): 2.

[91](#c1-note-0091a){#c1-note-0091}  Richard Stallman, "What Is Free
Software?", *GNU Operating System*, online.

[92](#c1-note-0092a){#c1-note-0092}  The fundamentally cooperative
nature of programming was recognized early on. See Gerald M. Weinberg,
*The Psychology of Computer Programming*, rev. edn (New York: Dorset
House, 1998 \[originally published in 1971\]).

[93](#c1-note-0093a){#c1-note-0093}  On the history of free software,
see Volker Grassmuck, *Freie Software: Zwischen Privat- und
Gemeineigentum* (Berlin: Bundeszentrale für politische Bildung, 2002).

[94](#c1-note-0094a){#c1-note-0094}  In his first email on the topic, he
wrote: "Hello everybody out there \[...\]. I'm doing a (free) operating
system (just a hobby, won\'t be big and professional like gnu) \[...\].
This has been brewing since April, and is starting to get ready. I\'d
like any feedback on things people like/dislike." Linus Torvalds, "What
[]{#Page_184 type="pagebreak" title="184"}Would You Like to See Most in
Minix," *Usenet Group* (August 1991), online.

[95](#c1-note-0095a){#c1-note-0095}  ARD/ZDF, "Onlinestudie" (2015),
online.

[96](#c1-note-0096a){#c1-note-0096}  From 1997 to 2003, the average use
of online media in Germany climbed from 76 to 138 minutes per day, and
by 2013 it reached 169 minutes. Over the same span of time, the average
frequency of use increased from 3.3 to 4.4 days per week, and by 2013 it
was 5.8. From 2007 to 2013, the percentage of people who were members of
private social networks like Facebook grew from 15 percent to 46
percent. Of these, nearly 60 percent -- around 19 million people -- used
such services on a daily basis. The source of this information is the
article cited in the previous note.

[97](#c1-note-0097a){#c1-note-0097}  "Internet Access Is 'a Fundamental
Right'," *BBC News* (8 March 2010), online.

[98](#c1-note-0098a){#c1-note-0098}  Manuel Castells, *The Power of
Identity* (Oxford: Blackwell, 1997), pp. 7--22.
:::
:::

[II]{.chapterNumber} [Forms]{.chapterTitle} {#c2}

::: {.section}
With the emergence of the internet around the turn of the millennium as
an omnipresent infrastructure for communication and coordination,
previously independent cultural developments began to spread beyond
their specific original contexts, mutually influencing and enhancing one
another, and becoming increasingly intertwined. Out of a disconnected
conglomeration of more or less marginalized practices, a new and
specific cultural environment thus took shape, usurping or marginalizing
an ever greater variety of cultural constellations. The following
discussion will focus on three *forms* of the digital condition; that
is, on those formal qualities that (notwithstanding all of its internal
conflicts and contradictions) lend a particular shape to this cultural
environment as a whole: *referentiality*, *communality*, and
*algorithmicity*. It is only because most of the cultural processes
operating under the digital condition are characterized by common formal
features such as these that it is reasonable to speak of the digital
condition in the singular.

"Referentiality" is a method with which individuals can inscribe
themselves into cultural processes and constitute themselves as
producers. Understood as shared social meaning, the arena of culture
entails that such an undertaking cannot be limited to the individual.
Rather, it takes place within a larger framework whose existence and
development depend on []{#Page_58 type="pagebreak" title="58"}communal
formations. "Algorithmicity" denotes those aspects of cultural processes
that are (pre-)arranged by the activities of machines. Algorithms
transform the vast quantities of data and information that characterize
so many facets of present-day life into dimensions and formats that can
be registered by human perception. It is impossible to read the content
of billions of websites. Therefore we turn to services such as Google\'s
search algorithm, which reduces the data flood ("big data") to a
manageable amount and translates it into a format that humans can
understand ("small data"). Without them, human beings could not
comprehend or do anything within a culture built around digital
technologies, but they influence our understanding and activity in an
ambivalent way. They create new dependencies by pre-sorting and making
the (informational) world available to us, yet simultaneously ensure our
autonomy by providing the preconditions that enable us to act.
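
To make this reduction concrete, consider a deliberately naive sketch: not
Google\'s actual ranking, whose signals are proprietary and constantly
adjusted, but a minimal stand-in with an invented corpus and a crude
term-frequency score that filters a larger set of documents down to a
short, humanly readable list.

```python
# Toy illustration of reducing "big data" to "small data": rank an
# invented corpus by a crude term-frequency score and keep only the
# top few results. All names and documents here are hypothetical;
# real search engines rely on far more elaborate, constantly changing
# ranking signals.

from collections import Counter

corpus = {
    "doc1": "mundaneum archive knowledge classification",
    "doc2": "cats pictures cats more cats",
    "doc3": "archive of digital knowledge and search",
}

def score(query: str, text: str) -> int:
    """Count how often the query terms occur in the text."""
    words = Counter(text.lower().split())
    return sum(words[term] for term in query.lower().split())

def search(query: str, top_n: int = 2) -> list:
    """Return the identifiers of the top_n highest-scoring documents."""
    ranked = sorted(corpus, key=lambda doc_id: score(query, corpus[doc_id]),
                    reverse=True)
    return ranked[:top_n]

print(search("knowledge archive"))  # e.g. ['doc1', 'doc3']
```

Whatever falls outside the top of such a ranking effectively disappears
from view, which is precisely the ambivalence described above.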
:::

::: {.section}
Referentiality {#c2-sec-0002}
--------------

In the digital condition, one of the methods (if not *the* most
fundamental method) enabling humans to participate -- alone or in groups
-- in the collective negotiation of meaning is the system of creating
references. In a number of arenas, referential processes play an
important role in the assignment of both meaning and form. According to
the art historian André Rottmann, for instance, "one might claim that
working with references has in recent years become the dominant
production-aesthetic model in contemporary
art."[^1^](#c2-note-0001){#c2-note-0001a} This burgeoning engagement
with references, however, is hardly restricted to the world of
contemporary art. Referentiality is a feature of many processes that
encompass the operations of various genres of professional and everyday
culture. In its essence, it is the use of materials that are already
equipped with meaning -- as opposed to so-called raw material -- to
create new meanings. The referential techniques used to achieve this are
extremely diverse, a fact reflected in the numerous terms that exist to
describe them: re-mix, re-make, re-enactment, appropriation, sampling,
meme, imitation, homage, tropicália, parody, quotation, post-production,
re-performance, []{#Page_59 type="pagebreak" title="59"}camouflage,
(non-academic) research, re-creativity, mashup, transformative use, and
so on.

These processes have two important aspects in common: the
recognizability of the sources and the freedom to deal with them however
one likes. The first creates an internal system of references from which
meaning and aesthetics are derived in an essential
manner.[^2^](#c2-note-0002){#c2-note-0002a} The second is the
precondition enabling the creation of something that is both new and on
the same level as the re-used material. This represents a clear
departure from the historical--critical method, which endeavors to embed
a source in its original context in order to re-determine its meaning,
but also a departure from classical forms of rendition such as
translations, adaptations (for instance, adapting a book for a film), or
cover versions, which, though they translate a work into another
language or medium, still attempt to preserve its original meaning.
Re-mixes produced by DJs are one example of the referential treatment of
source material. In his book on the history of DJ culture, the
journalist Ulf Poschardt notes: "The remixer isn\'t concerned with
salvaging authenticity, but with creating a new
authenticity."[^3^](#c2-note-0003){#c2-note-0003a} For instead of
distancing themselves from the past, which would follow the (Western)
logic of progress or the spirit of the avant-garde, these processes
refer explicitly to precursors and to existing material. In one and the
same gesture, both one\'s own new position and the context and cultural
tradition that is being carried on in one\'s own work are constituted
performatively; that is, through one\'s own activity in the moment. I
will discuss this phenomenon in greater depth below.

To work with existing cultural material is, in itself, nothing new. In
modern montages, artists likewise drew upon available texts, images, and
treated materials. Yet there is an important difference: montages were
concerned with bringing together seemingly incongruous but stable
"finished pieces" in a more or less unmediated and fragmentary manner.
This is especially clear in the collages by the Dadaists or in
Expressionist literature such as Alfred Döblin\'s *Berlin
Alexanderplatz*. In these works, the experience of Modernity\'s many
fractures -- its fragmentation and turmoil -- was given a new aesthetic
form. In his reference to montages, Adorno thus observed that the
"negation of synthesis becomes a principle []{#Page_60 type="pagebreak"
title="60"}of form."[^4^](#c2-note-0004){#c2-note-0004a} At least for a
brief moment, he considered them an adequate expression for the
impossibility of reconciling the contradictions of capitalist culture.
Influenced by Adorno, the literary theorist Peter Bürger went so far as
to call the montage the true "paradigm of
modernity."[^5^](#c2-note-0005){#c2-note-0005a} In today\'s referential
processes, on the contrary, pieces are not brought together as much as
they are integrated into one another by being altered, adapted, and
transformed. Unlike the older arrangement, it is not the fissures
between elements that are foregrounded but rather their synthesis in the
present. Conchita Wurst, the bearded diva, is not torn between two
conflicting poles. Rather, she represents a successful synthesis --
something new and harmonious that distinguishes itself by showcasing
elements of the old order (man/woman) and simultaneously transcending
them.

This synthesis, however, is usually just temporary, for at any time it
can itself serve as material for yet another rendering. Of course, this
is far easier to pull off with digital objects than with analog objects,
though these categories have become increasingly porous and thus
increasingly problematic as opposites. More and more objects exist both
in an analog and in a digital form. Think of photographs and slides,
which have become so easy to digitalize. Even three-dimensional objects
can now be scanned and printed. In the future, programmable materials
with controllable and reversible features will cause the difference
between the two domains to vanish: analog is becoming more and more
digital.

Montages and referential processes can only become widespread methods
if, in a given society, cultural objects are available in three
different respects. The first is economic and organizational: they must
be affordable and easily accessible. Whoever is unable to afford books
or get hold of them by some other means will not be able to reconfigure
any texts. The second is cultural: working with cultural objects --
which can always create deviations from the source in unpredictable ways
-- must not be treated as taboo or illegal, but rather as an everyday
activity without any special preconditions. It is much easier to
manipulate a text from a secular newspaper than one from a religious
canon. The third is material: it must be possible to use the material
and to change it.[^6^](#c2-note-0006){#c2-note-0006a}[]{#Page_61 type="pagebreak" title="61"}

In terms of this third form of availability, montages differ from
referential processes, for cultural objects can be integrated into one
another -- instead of simply being placed side by side -- far more
readily when they are digitally coded. Information is digitally coded
when it is stored by means of a limited system of discrete (that is,
separated by finite intervals or distances) signs that are meaningless
in themselves. This allows information to be copied from one carrier to
another without any loss and it allows the respective signs, whether
individually or in groups, to be arranged freely. Seen in this way,
digital coding is not necessarily bound to computers but can rather be
realized with all materials: a mosaic is a digital process in which
information is coded by means of variously colored tiles, just as a
digital image consists of pixels. In the case of the mosaic, of course,
the resolution is far lower. Alphabetic writing is a form of coding
linguistic information by means of discrete signs that are, in
themselves, meaningless. Consequently, Florian Cramer has argued that
"every form of literature that is recorded alphabetically and not based
on analog parameters such as ideograms or orality is already digital in
that it is stored in discrete
signs."[^7^](#c2-note-0007){#c2-note-0007a} However, the specific
features of the alphabet, as Marshall McLuhan repeatedly underscored,
did not fully develop until the advent of the printing
press.[^8^](#c2-note-0008){#c2-note-0008a} It was the printing press, in
other words, that first abstracted written signs from analog handwriting
and transformed them into standardized symbols that could be repeated
without any loss of information. In this practical sense, the printing
press made writing digital, with the result that dealing with texts soon
became radically different.
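
The practical difference between discrete and analog reproduction can be
made tangible in a similarly small sketch; the noise model below is
invented purely for illustration and stands in for the gradual
degradation of analog copies.

```python
# Discrete versus analog reproduction: copying a string of letters is
# lossless, while each "analog" copy adds a small, irreversible
# distortion. The noise model is invented purely for illustration.

import random

def copy_discrete(text: str, generations: int) -> str:
    """Each generation rewrites the same sequence of discrete signs; nothing is lost."""
    for _ in range(generations):
        text = "".join(ch for ch in text)
    return text

def copy_analog(value: float, generations: int, noise: float = 0.01) -> float:
    """Each generation adds a little random drift, as an analog re-recording might."""
    for _ in range(generations):
        value += random.uniform(-noise, noise)
    return value

print(copy_discrete("MOSAIC", 1000))  # 'MOSAIC', unchanged after 1,000 copies
print(copy_analog(1.0, 1000))         # has drifted away from 1.0
```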

::: {.section}
### Information overload 1.0 {#c2-sec-0003}

The printing press made texts available in the three respects mentioned
above. For one thing, their number increased rapidly, while their price
sank significantly. During the first two generations after Gutenberg\'s
invention -- that is, between 1450 and 1500 -- more books were produced
than during the thousand years
before.[^9^](#c2-note-0009){#c2-note-0009a} And that was just the
beginning. Dealing with books and their content changed from the ground
up. In manuscript culture, every new copy represented a potential
degradation of the original, and therefore []{#Page_62 type="pagebreak"
title="62"}the oldest sources (those that had undergone as little
corruption as possible) were valued above all. With the advent of print
culture, the idea took hold that texts could be improved by the process
of editing, not least because the availability of old sources, through
reprints and facsimiles, had also improved dramatically. Pure
reproduction was mechanized and overcome as a cultural challenge.

According to the historian Elizabeth Eisenstein, one of the first
consequences of the greatly increased availability of the printed book
was that it overcame the "tyranny of major authorities, which was common
in small libraries."[^10^](#c2-note-0010){#c2-note-0010a} Scientists
were now able to compare texts with one another and critique them to an
unprecedented extent. Their general orientation turned around: instead
of looking back in order to preserve what they knew, they were now
looking ahead toward what they might not (yet) know.

In order to organize this information flood of rapidly amassing texts,
it was necessary to create new conventions: books were now specified by
their author, publisher, and date of publication, not to mention
furnished with page numbers. This enabled large numbers of texts to be
catalogued and every individual text -- indeed, every single passage --
to be referenced.[^11^](#c2-note-0011){#c2-note-0011a} Scientists could
legitimize the pursuit of new knowledge by drawing attention to specific
mistakes or gaps in existing texts. In the scientific culture that was
developing at the time, the close connection between old and new
material was not simply regarded as something positive; it was also
urgently prescribed as a method of argumentation. Every text had to
contain an internal system of references, and this was the basis for the
development of schools, disciplines, and specific discourses.

The digital character of printed writing also made texts available in
the third respect mentioned above. Because discrete signs could be
reproduced without any loss of information, it was possible not only to
make perfect copies but also to remove content from one carrier and
transfer it to another. Materials were no longer simply arranged
sequentially, as in medieval compilations and almanacs, but manipulated
to give rise to a new and independent fluid text. A set of conventions
was developed -- one that remains in use today -- for modifying embedded
or quoted material in order for it []{#Page_63 type="pagebreak"
title="63"}to fit into its new environment. In this manner, quotations
could be altered in such a way that they could be integrated seamlessly
into a new text while remaining recognizable as direct citations.
Several of these conventions, for instance the use of square brackets to
indicate additions ("\[ \]") or ellipses to indicate omissions ("..."),
are also used in this very book. At the same time, the conventions for
making explicit references led to the creation of an internal reference
system that made the singular position of the new text legible within a
collective field of work. "Printing," to quote Elizabeth Eisenstein once
again, "encouraged forms of combinatory activity which were social as
well as intellectual. It changed relationships between men of learning
as well as between systems of
ideas."[^12^](#c2-note-0012){#c2-note-0012a} Exchange between scholars,
in the form of letters and visits, intensified. The seventeenth century
saw the formation of the *respublica literaria* or the "Republic of
Letters," a loose network of scholars devoted to promoting the ideas of
the Enlightenment. Beginning in the eighteenth century, the rapidly
growing number of scientific fields was arranged and institutionalized
into clearly distinct disciplines. In the nineteenth and twentieth
centuries, diverse media-technical innovations made images, sounds, and
moving images available, though at first only in analog formats. These
created the preconditions that enabled the montage in all of its forms
-- film cuts, collages, readymades, *musique concrète*, found-footage
films, literary cut-ups, and artistic assemblages (to name only the
best-known genres) -- to become the paradigm of Modernity.
:::

::: {.section}
### Information overload 2.0 {#c2-sec-0004}

It was not until new technical possibilities for recording, storing,
processing, and reproduction appeared over the course of the 1990s that
it also became increasingly possible to code and edit images, audio, and
video digitally. Through the networking that followed not long after,
society was flooded with an unprecedented amount of digitally coded
information *of every sort*, and the circulation of this
information accelerated. This was not, however, simply a quantitative
change but also and above all a qualitative one. Cultural materials
became available in a comprehensive []{#Page_64 type="pagebreak"
title="64"}sense -- economically and organizationally, culturally
(despite legal problems), and materially (because digitalized). Today it
would not be bold to predict that nearly every text, image, or sound
will soon exist in a digital form. Most of the new reproducible works
are already "born digital" and digitally distributed, or they are
physically produced according to digital instructions. Many initiatives
are working to digitalize older, analog works. We are now anchored in
the digital.

Among the numerous digitalization projects currently under way, the most
ambitious is that of Google Books, which, since its launch in 2004, has
digitalized around 20 million books from the collections of large
libraries and prepared them for full-text searches. Right from the
start, a fierce debate arose about the legal and cultural acceptability
of this project. One concern was whether Google\'s process infringed
upon the rights of the authors and publishers of the scanned books or
whether, according to American law, it qualified as "fair use," in which
case there would be no obligation for the company to seek authorization
or offer compensation. The second main concern was whether it would be
culturally or politically appropriate for a private corporation to hold
a de facto monopoly over the digital heritage of book culture. The first
issue incited a complex legal battle that, in 2013, was decided in
Google\'s favor by a judge on the United States District Court in New
York.[^13^](#c2-note-0013){#c2-note-0013a} At the heart of the second
issue was the question of how a public library should look in the
twenty-first century.[^14^](#c2-note-0014){#c2-note-0014a} In November
of 2008, the European Commission and the cultural minister of the
European Union launched the virtual Europeana library, which occurred
after a number of European countries had already invested hundreds of
millions of euros in various digitalization
initiatives.[^15^](#c2-note-0015){#c2-note-0015a} Today, Europeana
serves as a common access point to the online archives of around 2,500
European cultural institutions. By the end of 2015, its digital holdings
had grown to include more than 40 million objects. This is still,
however, a relatively small number, for it has been estimated that
European archives and museums contain more than 220 million
natural-historical and more than 260 million cultural-historical
objects. In the United States, discussions about the future of libraries
[]{#Page_65 type="pagebreak" title="65"}led to the 2013 launch of the
Digital Public Library of America (DPLA), which, like Europeana,
provides common access to the digitalized holdings of archives, museums,
and libraries. By now, more than 14 million items can be viewed there.

In one way or another, however, both the private and the public projects
of this sort have been limited by binding copyright laws. The librarian
and book historian Robert Darnton, one of the most prominent advocates
of the Digital Public Library of America, has accordingly stated: "The
main impediment to the DPLA\'s growth is legal, not financial. Copyright
laws could exclude everything published after 1964, most works published
after 1923, and some that go back as far as
1873."[^16^](#c2-note-0016){#c2-note-0016a} The legal situation in
Europe is similar to that in the United States. It, too, massively
obstructs the work of public
institutions.[^17^](#c2-note-0017){#c2-note-0017a} In many cases, this
has had the absurd consequence that certain materials, though they have
been fully digitalized, may only be accessed in part or exclusively
inside the facilities of a particular institution. Whereas companies
such as Google can afford to wage long legal battles, and in the
meantime create precedents, public institutions must proceed with great
caution, not least to avoid the accusation of using public funds to
violate copyright laws. Thus, they tend to fade into the background and
leave users, who are unfamiliar with the complex legal situation, with
the impression that they are even more out-of-date than they often are.

Informal actors, who explicitly operate beyond the realm of copyright
law, are not faced with such restrictions. UbuWeb, for instance, which
is the largest online archive devoted to the history of
twentieth-century avant-garde art, was not created by an art museum but
rather by the initiative of an individual artist, Kenneth Goldsmith.
Since 1996, he has been collecting historically relevant materials that
were no longer in distribution and placing them online for free and
without any stipulations. He forgoes the process of obtaining the rights
to certain works of art because, as he remarks on the website, "Let\'s
face it, if we had to get permission from everyone on UbuWeb, there
would be no UbuWeb."[^18^](#c2-note-0018){#c2-note-0018a} It would
simply be too demanding to do so. Because he pursues the project without
any financial interest and has saved so much []{#Page_66
type="pagebreak" title="66"}from oblivion, his efforts have provoked
hardly any legal difficulties. On the contrary, UbuWeb has become so
important that Goldsmith has begun to receive more and more material
directly from artists and their heirs, who would like certain works not
to be forgotten. Nevertheless, or perhaps for this very reason,
Goldsmith repeatedly stresses the instability of his archive, which
could disappear at any moment if he loses interest in maintaining it or
if something else happens. Users are therefore able to download works
from UbuWeb and archive, on their own, whatever items they find most
important. Of course, this fragility contradicts the idea of an archive
as a place for long-term preservation. Yet such a task could only be
undertaken by an institution that is oriented toward the long term.
Because of the existing legal conditions, however, it is hardly likely
that such an institution will come about.

Whereas Goldsmith is highly adept at operating within a niche that not
only tolerates but also accepts the violation of formal copyright
claims, large websites responsible for the uncontrolled dissemination of
digital content do not bother with such niceties. Their purpose is
rather to ensure that all popular content is made available digitally
and for free, whether legally or not. These sites, too, have experienced
uninterrupted growth. By the end of 2015, tens of millions of people
were simultaneously using the BitTorrent tracker The Pirate Bay -- the
largest nodal point for file-sharing networks during the last decade --
to exchange several million digital files with one
another.[^19^](#c2-note-0019){#c2-note-0019a} And this was happening
despite protracted attempts to block or close down the file-sharing site
by legal means and despite a variety of competing services. Even when
the founders of the website were sentenced in Sweden to pay large fines
(around €3 million) and to serve time in prison, the site still did not
disappear from the internet.[^20^](#c2-note-0020){#c2-note-0020a} At the
same time, new providers have entered the market of free access; their
method is not to facilitate distributed downloads but rather to offer,
on account of the drastically reduced cost of data transfers, direct
streaming. Although some of these services are relatively easy to locate
and some have been legally banned -- the best-known case in Germany
being that of the popular site kino.to -- more of them continue to
appear.[^21^](#c2-note-0021){#c2-note-0021a} Moreover, this phenomenon
[]{#Page_67 type="pagebreak" title="67"}is not limited to music and
films, but encompasses all media formats. For instance, it is
foreseeable that the number of freely available plans for 3D objects
will increase along with the popularity of 3D printing. It has almost
escaped notice, however, that so-called "shadow libraries" have been
popping up everywhere; the latter are not accessible to the public but
rather to members, for instance, of closed exchange platforms or of
university intranets. Few seminars take place any more without a corpus
of scanned texts, regardless of whether this practice is legal or
not.[^22^](#c2-note-0022){#c2-note-0022a}

The lines between these different mechanisms of access are highly
permeable. Content acquired legally can make its way to file-sharing
networks as an illegal copy; content available for free can be sold in
special editions; content from shadow libraries can make its way to
publicly accessible sites; and, conversely, content that was once freely
available can disappear into shadow libraries. As regards free access,
the details of this rapidly changing landscape are almost
inconsequential, for the general trend that has emerged from these
various dynamics -- legal and illegal, public and private -- is
unambiguous: in a comprehensive and practical sense, cultural works of
all sorts will become freely available despite whatever legal and
technical restrictions might be in place. Whether absolutely all
material will be made available in this way is not the decisive factor,
at least not for the individual, for, as the German Library Association
has stated, "it is foreseeable that non-digitalized material will
increasingly escape the awareness of users, who have understandably come
to appreciate the ubiquitous availability and more convenient
processability of the digital versions of analog
objects."[^23^](#c2-note-0023){#c2-note-0023a} In this context of excess
information, it is difficult to determine whether a particular work or a
crucial reference is missing, given that a multitude of other works and
references can be found in their place.

At the same time, prodigious amounts of new material are being produced
that, before the era of digitalization and networks, never could have
existed at all or never would have left the private sphere. An example
of this is amateur photography. This is nothing new in itself; as early
as 1899, Kodak was marketing its films and apparatus with the slogan
"You press the button, we do the rest," and ever since, []{#Page_68
type="pagebreak" title="68"}drawers and albums have been overflowing
with photographs. With the advent of digitalization, however, certain
economic and material limitations ceased to exist that, until then, had
caused most private photographers to think twice about how many shots
they wanted to take. After all, they had to pay for the film to be
developed and then store the pictures somewhere. Cameras also became
increasingly "intelligent," which improved the technical quality of
photographs. Even complex procedures such as increasing the level of
detail or the contrast ratio -- the difference between an image\'s
brightest and darkest points -- no longer require any specialized
knowledge of photochemical processes in the darkroom. Today, such
features are often pre-installed in many cameras as an option (high
dynamic range). Ever since the introduction of built-in digital cameras
for smartphones, anyone with such a device can take pictures everywhere
and at any time and then store them digitally. Images can then be posted
on online platforms and shared with others. By the middle of 2015,
Flickr -- the largest but certainly not the only specialized platform of
this sort -- had more than 112 million registered users participating in
more than 2 million groups. Every user has access to free storage space
for about half a million of his or her own pictures. At that point, in
other words, the platform was equipped to manage more than 55 billion
photographs. Around 3.5 million images were being uploaded every day,
many of which could be accessed by anyone. This may seem like a lot, but
in reality it is just a small portion of the pictures that are posted
online on a daily basis. Around that same time -- again, the middle of
2015 -- approximately 350 million pictures were being posted on Facebook
*every day*. The total number of photographs saved there has been
estimated to be 250 billion. In addition, there are also large platforms
for professional "stock photos" (supplies of pre-produced images that
are supposed to depict generic situations) and the databanks of
professional agencies such as Getty Images or Corbis. All of these images
can be found easily and acquired quickly (though not always for free).
Yet photography is not unique in this regard. In all fields, the number
of cultural artifacts available to the public on specialized platforms
has been increasing rapidly in recent years.[]{#Page_69 type="pagebreak"
title="69"}
:::

::: {.section}
### The great disorder {#c2-sec-0005}

The old orders that had been responsible for filtering, organizing, and
publishing cultural material -- culture industries, mass media,
libraries, museums, archives, etc. -- are incapable of managing almost
any aspect of this deluge. They can barely function as gatekeepers any
more between those realms that, with their help, were once defined as
"private" and "public." Their decisions about what is or is not
important matter less and less. Moreover, having already been subjected
to a decades-long critique, their rules, which had been relatively
binding and formative over long periods of time, are rapidly losing
practical significance.

Even Europeana, a relatively small project based on traditional museums
and archives and with a mandate to make the European cultural heritage
available online, has contributed to the disintegration of established
orders: it indiscriminately brings together 2,500 previously separated
institutions. The specific semantic contexts that formerly shaped the
history and orientation of institutions have been dissolved or reduced
to dry meta-data, and millions upon millions of cultural artifacts are
now equidistant from one another. Instead of certain artifacts being
firmly anchored in a location, for instance in an ethnographic
collection devoted to the colonial history of France, it is now possible
for everything to exist side by side. Europeana is not an archive in the
traditional sense, or even a museum with a fixed and meaningful order;
rather, it is just a standard database. Everything in it is just one
search request away, and every search generates a unique order in the
form of a sequence of visible artifacts. As a result, individual objects
are freed from those meta-narratives, created by the museums and
archives that preserve them, which situate them within broader contexts
and assign more or less clear meanings to them. They consequently become
more open to interpretation. A search result does not articulate an
interpretive field of reference but merely a connection, created by
constantly changing search algorithms, between a request and the corpus
of material, which is likewise constantly changing.
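
A minimal sketch, with invented records and fields, of what such a
"standard database" implies: artifacts from formerly separated
institutions sit side by side in one flat table, and each query produces
its own ordering.

```python
# A flat metadata table: artifacts from formerly separate institutions
# side by side, stripped of their original context. Records and fields
# are invented for illustration.

records = [
    {"id": 1, "title": "Carte du ciel", "institution": "Observatory", "year": 1887},
    {"id": 2, "title": "Herbarium sheet", "institution": "Natural history museum", "year": 1902},
    {"id": 3, "title": "Index card drawer", "institution": "Documentation centre", "year": 1920},
]

def query(term: str) -> list:
    """Return the ids of matching records, newest first -- one possible ordering."""
    hits = [r for r in records if term.lower() in r["title"].lower()]
    return [r["id"] for r in sorted(hits, key=lambda r: r["year"], reverse=True)]

print(query("card"))  # [3]
print(query("e"))     # [3, 2, 1] -- a different sequence for a different request
```

The same records, a different request, a different sequence: the ordering
is produced by the query, not by any curatorial narrative.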

Precisely because it offers so many different approaches to more or less
freely combinable elements of information, []{#Page_70 type="pagebreak"
title="70"}the order of the database no longer really provides a
framework for interpreting search results in a meaningful way.
Altogether, the meaning of many objects and signs is becoming even more
uncertain. On the one hand, this is because the connection to their
original context is becoming fragile; on the other hand, it is because
they can appear in every possible combination and in the greatest
variety of reception contexts. In less official archives and in less
specialized search engines, the dissolution of context is far more
pronounced than it is in the case of the Europeana project. For the sake
of orienting its users, for instance, YouTube provides the date when a
video has been posted, but there is no indication of when a video was
actually produced. Further information provided about a video, for
example in the comments section, is essentially unreliable. It might be
true -- or it might not. The internet researcher David Weinberger has
called this the "new digital disorder," which, at least for many users,
is an entirely apt description.[^24^](#c2-note-0024){#c2-note-0024a} For
individuals, this disorder has created both the freedom to establish
their own orders and the obligation of doing so, regardless of whether
or not they are ready for the task.

This tension between freedom and obligation is at its strongest online,
where the excess of culture and its more or less free availability are
immediate and omnipresent. In fact, everything that can be retrieved
online is culture in the sense that everything -- from the deepest layer
of hardware to the most superficial tweet -- has been made by someone
with a particular intention, and everything has been made to fit a
particular order. And it is precisely this excess of often contradictory
meanings and limited, regional, and incompatible orders that leads to
disorder and meaninglessness. This is not limited to the online world,
however, because the latter is not self-contained. In an essential way,
digital media also serve to organize the material world. On the basis of
extremely complex and opaque yet highly efficient logistical and
production processes, people are also confronted with constantly
changing material things about whose origins and meanings they have
little idea. Even something as simple to produce as yoghurt has usually
traveled a thousand kilometers before it ends up on a shelf in the
supermarket. The logistics that enable this are oriented toward
flexibility; []{#Page_71 type="pagebreak" title="71"}they bring elements
together as efficiently as possible. It is nearly impossible for final
customers to find out anything about the ingredients. Customers are
merely supposed to be oriented by signs and notices such as "new" or "as
before," "natural," and "healthy," which are written by specialists and
meant to manipulate shoppers as much as the law allows. Even here, in
corporeal everyday life, every individual has to deal with a surge of
excess and disorder that threatens to erode the original meaning
conferred on every object -- even where such meaning was once entirely
unproblematic, as in the case of
yoghurt.[^25^](#c2-note-0025){#c2-note-0025a}
:::

::: {.section}
### Selecting and organizing {#c2-sec-0006}

In this situation, the creation of one\'s own system of references has
become a ubiquitous and generally accessible method for organizing all
of the ambivalent things that one encounters on a given day. Such things
are thus arranged within a specific context of meaning that also
(co)determines one\'s own relation to the world and subjective position
in it. Referentiality takes place through three types of activity, the
first being simply to attract attention to certain things, which affirms
(at least implicitly) that they are important. With every single picture
posted on Flickr, every tweet, every blog post, every forum post, and
every status update, the user is doing exactly that; he or she is
communicating to others: "Look over here! I think this is important!" Of
course, there is nothing new to filtering and allocating meaning. What
is new, however, is that these processes are no longer being carried out
primarily by specialists at editorial offices, museums, or archives, but
have become daily requirements for a large portion of the population,
regardless of whether they possess the material and cultural resources
that are necessary for the task.
:::

::: {.section}
### The loop through the body {#c2-sec-0007}

Given the flood of information that perpetually surrounds everyone, the
act of focusing attention and reducing vast numbers of possibilities
into something concrete has become a productive achievement, however
banal each of these micro-activities might seem on its own, and even if,
at first, []{#Page_72 type="pagebreak" title="72"}the only concern might
be to focus the attention of the person doing it. The value of this
(often very brief) activity is that it singles out elements from the
uniform sludge of unmanageable complexity. Something plucked out in this
way gains value because it has required the use of a resource that
cannot be reproduced, that exists outside of the world of information
and that is invariably limited for every individual: our own lifetime.
Every status update that is not machine-generated means that someone has
invested time, be it only a second, in order to point to this and not to
something else. Thus, a process of validating what exists in the excess
takes place in connection with the ultimate scarcity -- our own
lifetimes, our own bodies. Even if the value generated by this act is
minimal or diffuse, it is still -- to borrow from Gregory Bateson\'s
famous definition of information -- a difference that makes a difference
in this stream of equivalencies and
meaninglessness.[^26^](#c2-note-0026){#c2-note-0026a} This singling out
-- this use of one\'s own body to generate meaning -- does not, however,
take place by means of mere micro-activities throughout the day; it is
also a defining aspect of complex cultural strategies. In recent years,
re-enactment (that is, the re-staging of historical situations and
events) has established itself as a common practice in contemporary art.
Unlike traditional re-enactments, such as those of historically
significant battles, which attempt to represent the past as faithfully
as possible, "artistic re-enactments," according to the curator Inke
Arns, "are not an affirmative confirmation of the past; rather, they are
*questionings* of the present through reaching back to historical
events," especially as they are represented in images and other forms of
documentation. Thanks to search engines and databases, such
representations are more or less always present, though in the form of
indeterminate images, ambivalent documents, and contentious
interpretations. Artists in this situation, as Arns explains,

::: {.extract}
do not ask the naïve question about what really happened outside of the
history represented in the media -- the "authenticity" beyond the images
-- instead, they ask what the images we see might mean concretely to us,
if we were to experience these situations personally. In this way the
artistic reenactment confronts the general feeling of insecurity about
the meaning []{#Page_73 type="pagebreak" title="73"}of images by using a
paradoxical approach: through erasing distance to the images and at the
same time distancing itself from the
images.[^27^](#c2-note-0027){#c2-note-0027a}
:::

This paradox manifests itself in that the images are appropriated and
sublated through the use of one\'s own body in the re-enactments. They
simultaneously refer to the past and create a new reality in the
present. In perhaps the best-known re-enactment of this type, the artist
Jeremy Deller revived, in 2001, the Battle of Orgreave, one of the
central episodes of the British miners\' strike of 1984 and 1985. This
historical event is regarded as a turning point in the protracted
conflict between Margaret Thatcher\'s government and the labor unions --
a key moment in the implementation of Great Britain\'s neoliberal
regime, which is still in effect today. In Deller\'s re-enactment, the
heart of the matter is not historical accuracy, which is always
controversial in such epoch-changing events. Rather, he focuses on the
former participants -- the miners and police officers alike, who, along
with non-professional actors, lived through the situation again -- in
order to explore both the distance from the events and their
representation in the media, as well as their ongoing biographical and
societal presence.[^28^](#c2-note-0028){#c2-note-0028a}

Elaborate practices of embodying medial images through processes of
appropriation and distancing have also found their way into popular
culture, for instance in so-called "cosplay." The term, which is a
contraction of the words "costume" and "play," was coined by a Japanese
man named Nobuyuki Takahashi. In 1984, while attending the World Science
Fiction Convention in Los Angeles, he used the word to describe the
practice of certain attendees who dressed up as their favorite characters.
Participants in cosplay embody fictitious figures -- mostly from the
worlds of science fiction, comics/manga, or computer games -- by donning
home-made costumes and striking characteristic
poses.[^29^](#c2-note-0029){#c2-note-0029a} The often considerable
effort that goes into this is mostly reflected in the costumes, not in
the choreography or dramaturgy of the performance. What is significant
is that these costumes are usually not exact replicas but are rather
freely adapted by each player to represent the character as he or she
interprets it to be. Accordingly, "Cosplay is a form of appropriation
[]{#Page_74 type="pagebreak" title="74"}that transforms, actualizes and
performs an existing story in close connection to the fan\'s own
identity."[^30^](#c2-note-0030){#c2-note-0030a} This practice,
admittedly, goes back quite far in the history of fan culture, but it
has experienced a striking surge through the opportunity for fans to
network with one another around the world, to produce costumes and
images of professional quality, and to place themselves on the same
level as their (fictitious) idols. By now it has become a global
subculture whose members are active not only online but also at hundreds
of conventions throughout the world. In Germany, an annual cosplay
competition has been held since 2007 (it is organized by the Frankfurt
Book Fair and Animexx, the country\'s largest manga and anime
community). The scene, which has grown and branched out considerably
over the past few years, has slowly begun to professionalize, with
shops, books, and players who make paid appearances. Even in fan
culture, stars are born. As soon as the subculture has exceeded a
certain size, this gradual onset of commercialization will undoubtedly
lead to tensions within the community. For now, however, two of its
noteworthy features remain: the power of the desire to appropriate, in a
bodily manner, characters from vast cultural universes, and the
widespread combination of free interpretation and meticulous attention
to detail.
:::

::: {.section}
### Lineages and transformations {#c2-sec-0008}

Because of the great effort that they require, re-enactment and cosplay
are somewhat extreme examples of singling out, appropriating, and
referencing. As everyday activities that take place almost incidentally,
however, these three practices usually do not make any significant or
lasting differences. Yet they do not happen just once, but over and over
again. They accumulate and thus constitute referentiality\'s second type
of activity: the creation of connections between the many things that
have attracted attention. In such a way, paths are forged through the
vast complexity. These paths, which can be formed, for instance, by
referring to different things one after another, likewise serve to
produce and filter meaning. Things that can potentially belong in
multiple contexts are brought into a single, specific context. For the
individual []{#Page_75 type="pagebreak" title="75"}producer, this is how
fields of attention, reference systems, and contexts of meaning are
first established. In the third step, the things that have been selected
and brought together are changed. Perhaps something is removed to modify
the meaning, or perhaps something is added that was previously absent or
unavailable. Either way, referential culture is always producing
something new.

These processes are applied both within individual works (referentiality
in a strict sense) and within currents of communication that consist of
numerous molecular acts (referentiality in a broader sense). This latter
sort of compilation is far more widespread than the creation of new
re-mix works. Consider, for example, the billionfold sequences of status
updates, which sometimes involve a link to an interesting video,
sometimes a post of a photograph, then a short list of favorite songs, a
top 10 chart from one\'s own feed, or anything else. Such methods of
inscribing oneself into the world by means of references, combinations,
or alterations are used to create meaning through one\'s own activity in
the world and to constitute oneself in it, both for one\'s self and for
others. In a culture that manifests itself to a great extent through
mediatized communication, people have to constitute themselves through
such acts, if only by posting
"selfies."[^31^](#c2-note-0031){#c2-note-0031a} Not to do so would be to
risk invisibility and being forgotten.

On this basis, a genuine digital folk culture of re-mixing and mashups
has formed in recent years on online platforms, in game worlds, but also
through cultural-economic productions of individual pieces or short
series. It is generated and maintained by innumerable people with
varying degrees of intensity and ambition. Its common feature with
traditional folk culture, in choirs or elsewhere, is that production
and reception (but also reproduction and creation) largely coincide.
Active participation admittedly requires a certain degree of
proficiency, interest, and engagement, but usually not any extraordinary
talent. Many classical institutions such as museums and archives have
been attempting to take part in this folk culture by setting up their
own re-mix services. They know that the "public" is no longer able or
willing to limit its engagement with works of art and cultural history
to one of quiet contemplation. At the end of 2013, even []{#Page_76
type="pagebreak" title="76"}the Deutsches Symphonie-Orchester Berlin
initiated a re-mix competition. A year earlier, the Rijksmuseum in
Amsterdam launched its so-called "Rijksstudio." Since then, the museum has
made available on its website more than 200,000 high-resolution images
from its collection. Users are free to use these to create their own
re-mixes online and share them with others. Interestingly, the
Rijksmuseum does not distinguish between the work involved in
transforming existing pieces and that involved in curating one\'s own
online gallery.

Referential processes have no beginning and no end. Any material that is
used to make something new has a pre-history of its own, even if its
traces are lost in clouds of uncertainty. Upon closer inspection, this
cloud might clear a little bit, but it is extremely uncommon for a
genuine beginning -- a *creatio ex nihilo* -- to be revealed. This
raises the question of whether there can really be something like
originality in the emphatic sense.[^32^](#c2-note-0032){#c2-note-0032a}
Regardless of the answer to this question, the fact that by now many
people select, combine, and alter objects on a daily basis has led to a
slow shift in our perception and sensibilities. In light of the
experiences that so many people are creating, the formerly exotic
theories of deconstruction suddenly seem anything but outlandish. Nearly
half a century ago, Roland Barthes defined the text as a fabric of
quotations, and this incited vehement
opposition.[^33^](#c2-note-0033){#c2-note-0033a} "But of course," one
would be inclined to say today, "that can be statistically proven
through software analysis!" Amazon identifies books by means of their
"statistically improbable phrases"; that is, by means of textual
elements that are highly unlikely to occur elsewhere. This implies, of
course, that books contain many textual elements that are highly likely
to be found in other texts, without suggesting that such elements would
have to be regarded as plagiarism.
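
The idea behind such phrase statistics can be sketched in a few lines (a
toy illustration only: Amazon\'s actual procedure is not public, and the
scoring function used here is an assumption). A phrase counts as
"improbable" when it occurs often in one text but rarely in a background
corpus of other texts.

```python
from collections import Counter
from itertools import islice

def ngrams(words, n=3):
    """Yield consecutive n-word phrases (as tuples) from a list of words."""
    return zip(*(islice(words, i, None) for i in range(n)))

def improbable_phrases(text, background_texts, n=3, top=10):
    """Rank phrases that are frequent in `text` but rare in the background
    corpus -- a stand-in for 'all other books'. Toy scoring, not Amazon's."""
    tokenize = lambda t: t.lower().split()
    target = Counter(ngrams(tokenize(text), n))
    background = Counter()
    for other in background_texts:
        background.update(ngrams(tokenize(other), n))
    scored = {phrase: count / (1 + background[phrase])
              for phrase, count in target.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top]
```

Run over a whole library, the same logic also shows the inverse point made
above: most phrases in any book score close to zero, because they recur
everywhere.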

In the Gutenberg Galaxy, with its fixation on writing, the earliest
textual document is usually understood to represent a beginning. If no
references to anything before can be identified, the text is then
interpreted as a closed entity, as a new text. Thus, fairy tales and
sagas, which are typical elements of oral culture, are still more
strongly associated with the names of those who recorded them than with
the names of those who narrated them. This does not seem very convincing
today. In recent years, literary historians have made strong []{#Page_77
type="pagebreak" title="77"}efforts to shift the focus of attention to
the people (mostly women) who actually told certain fairy tales. In
doing so, they have been able to work out to what extent the respective
narrators gave shape to specific stories, which were written down as
common versions, and to what extent these stories reflect their
narrators\' personal histories.[^34^](#c2-note-0034){#c2-note-0034a}

Today, after more than 40 years of deconstructionist theory and a change
in our everyday practices, it is no longer controversial to read works
-- even by canonical figures like Wagner or Mozart -- in such a way as
to highlight the other works, either by the artists in question or by
other artists, that are contained within
them.[^35^](#c2-note-0035){#c2-note-0035a} This is not an expression of
decreased appreciation but rather an indication that, as Zygmunt Bauman
has stressed, "The way human beings understand the world tends to be at
all times *praxeomorphic*: it is always shaped by the know-how of the
day, by what people can do and how they usually go about doing
it."[^36^](#c2-note-0036){#c2-note-0036a} And the everyday practice of
today is one of singling out, bringing together, altering, and adding.
Accordingly, not only has our view of current cultural production
shifted; our view of cultural history has shifted as well. As always,
the past is made to suit the sensibilities of the present.

As a rule, however, things that have no beginning also have no end. This
is not only because they can in turn serve as elements for other new
contexts of meaning, but also because the attention paid to the context
in which they take on specific meaning is sensitive to the work that has
to be done to maintain the context itself. Even timelessness is an
elaborate everyday business. The attempt to rescue works of art from the
ravages of time -- to preserve them forever -- means that they regularly
need to be restored. Every restoration inevitably stirs a debate about
whether the planned interventions are appropriate and about how to deal
with the traces of previous interventions, which, from the current
perspective, often seem to be highly problematic. Whereas, just a
generation ago, preservationists ensured that such interventions
remained visible (as articulations of the historical fissures that are
typical of Modernity), today greater emphasis is placed on reducing
their visibility and re-creating the illusion of an "original condition"
(without, however, impeding any new functionality that a piece might
have in the present). []{#Page_78 type="pagebreak" title="78"}The
historically faithful restoration of the Berlin City Palace, combined with
its repurposing as a museum and meeting place, is typical of this new
attitude in dealing with our historical heritage.

In everyday activity, too, the never-ending necessity of this work can
be felt at all times. Here the issue is not timelessness, but rather
that the established contexts of meaning quickly become obsolete and
therefore have to be continuously affirmed, expanded, and changed in
order to maintain the relevance of the field that they define. This
lends referentiality a performative character that combines productive
and reproductive dimensions. That which is not constantly used and
renewed simply disappears. Often, however, this only means that it will
sink into an endless archive and become unrealized potential until
someone reactivates it, breathes new life into it, rouses it from its
slumber, and incorporates it into a newly relevant context of meaning.
"To be relevant," according to the artist Eran Schaerf, "things must be
recyclable."[^37^](#c2-note-0037){#c2-note-0037a}

Alone, everyone is overwhelmed by the task of having to generate meaning
against this backdrop of all-encompassing meaninglessness. First, the
challenge is too great for any individual to overcome; second, meaning
itself is only created intersubjectively. While it can admittedly be
asserted by a single person, others have to confirm it before it can
become a part of culture. For this reason, the actual subject of
cultural production under the digital condition is not the individual
but rather the next-largest unit.
:::
:::

::: {.section}
Communality {#c2-sec-0009}
-----------

On one\'s own, it is impossible to orient oneself within a complex
environment. Meaning -- as well as the ability to act -- can only be
created, reinforced, and altered in exchange with others. This is
nothing noteworthy; biologically and culturally, people are social
beings. What has changed historically is how people are integrated into
larger contexts, how processes of exchange are organized, and what every
individual is expected to do in order to become a fully fledged
participant in these processes. For nearly 50 years, traditional
[]{#Page_79 type="pagebreak" title="79"}institutions -- that is,
hierarchically and bureaucratically organized civic institutions such
as established churches, labor unions, and political parties -- have
continuously been losing members.[^38^](#c2-note-0038){#c2-note-0038a}
In tandem with this, the overall commitment to the identities, family
values, and lifestyles promoted by these institutions has likewise been
in decline. The great mechanisms of socialization from the late stages
of the Gutenberg Galaxy have been losing more and more of their
influence, though at different speeds and to different extents. All
told, however, explicitly and collectively normative impulses are
decreasing, while others (implicitly economic, above all) are on the
rise. According to mainstream sociology, a cause or consequence of this
is the individualization and atomization of society. As early as the
middle of the 1980s, Ulrich Beck claimed: "In the individualized society
the individual must therefore learn, on pain of permanent disadvantage,
to conceive of himself or herself as the center of action, as the
planning office with respect to his/her own biography, abilities,
orientations, relationships and so
on."[^39^](#c2-note-0039){#c2-note-0039a} Over the past three decades,
the dominant neoliberal political orientation, with its strong stress on
the freedom of the individual -- to realize oneself as an individual
actor in the allegedly open market and in opposition to allegedly
domineering collective mechanisms -- has radicalized these tendencies
even further. The ability to act, however, is not only a question of
one\'s personal attitude but also of material resources. And it is this
same neoliberal politics that deprives so many people of the resources
needed to take advantage of these new freedoms in their own lives. As a
result they suffer, in Ulrich Beck\'s terms, "permanent disadvantage."

Under the digital condition, this process has permeated the finest
structures of social life. Individualization, commercialization, and the
production of differences (through design, for instance) are ubiquitous.
Established civic institutions are not alone in being hollowed out;
relatively new collectives are also becoming more differentiated, a
development that I outlined above with reference to the transformation
of the gay movement into the LGBT community. Yet nevertheless, or
perhaps for this very reason, new forms of communality are being formed
in these offshoots -- in the small activities of everyday life. And
these new communal formations -- rather []{#Page_80 type="pagebreak"
title="80"}than individual people -- are the actual subjects who create
the shared meaning that we call culture.

::: {.section}
### The problem of the "community" {#c2-sec-0010}

I have chosen the rather cumbersome expression "communal formation" in
order to avoid the term "community" (*Gemeinschaft*), although the
latter is used increasingly often in discussions of digital cultures and
has played an important role, from the beginning, in conceptions of
networking. Viewed analytically, however, "community" is a problematic
term because it is almost hopelessly overloaded. Particularly in the
German-speaking tradition, Ferdinand Tönnies\'s polar distinction
between "community" (*Gemeinschaft*) and "society" (*Gesellschaft*),
which he introduced in 1887, remains
influential.[^40^](#c2-note-0040){#c2-note-0040a} Tönnies contrasted two
fundamentally different and exclusive types of social relations. Whereas
community is characterized by the overlapping multidimensional nature of
social relationships, society is defined by the functional separation of
its sectors and spheres. Community embeds every individual into complex
social relationships, all of which tend to be simultaneously present. In
the traditional village community ("communities of place," in Tönnies\'s
terms), neighbors are involved with one another, for better or for
worse, both on a familiar basis and economically or religiously. Every
activity takes place on several different levels at the same time.
Communities are comprehensive social institutions that penetrate all
areas of life, endowing them with meaning. Through mutual dependency,
they create stability and security, but they also obstruct change and
hinder social mobility. Because everyone is connected with each other,
no one can leave his or her place without calling into question the
arrangement as a whole. Communities are thus structurally conservative.
Because every human activity is embedded in multifaceted social
relationships, every change requires adjustments across the entire
interrelational web -- a task that is not easy to accomplish.
Accordingly, the traditional communities of the eighteenth and
nineteenth centuries fiercely opposed the establishment of capitalist
society. In order to impose the latter, the old community structures
were broken apart with considerable violence. This is what Marx
[]{#Page_81 type="pagebreak" title="81"}and Engels were referring to in
that famous passage from *The Communist Manifesto*: "All the settled,
age-old relations with their train of time-honoured preconceptions and
viewpoints are dissolved. \[...\] Everything feudal and fixed goes up in
smoke, everything sacred is
profaned."[^41^](#c2-note-0041){#c2-note-0041a}

The defining feature of society, on the contrary, is that it frees the
individual from such multifarious relationships. Society, according to
Tönnies, separates its members from one another. Although they
coordinate their activity with others, they do so in order to pursue
partial, short-term, and personal goals. Not only are people separated,
but so too are different areas of life. In a market-oriented society,
for instance, the economy is conceptualized as an independent sphere. It
can therefore break away from social connections to be organized simply
by limited formal or legal obligations between actors who, beyond these
obligations, have nothing else to do with one another. Costs or benefits
that inadvertently affect people who are uninvolved in a given market
transaction are referred to by economists as "externalities," and market
participants do not need to care about these because they are strictly
pursuing their own private interests. One of the consequences of this
form of social relationship is a heightened social dynamic, for now it
is possible to introduce changes into one area of life without
considering their effects on other areas. In the end, the dissolution of
mutual obligations, increased uncertainty, and the reduction of many
social connections go hand in hand with what Marx and Engels referred to
in *The Communist Manifesto* as "unfeeling hard cash."

From this perspective, the historical development looks like an
ambivalent process of modernization in which society (dynamic, but cold)
is erected over the ruins of community (static, but warm). This is an
unusual combination of romanticism and progress-oriented thinking, and
the problems with this influential perspective are numerous. There is,
first, the matter of its dichotomy; that is, its assumption that there
can only be these two types of arrangement, community and society. Or
there is the notion that the one form can be completely ousted by the
other, even though aspects of community and aspects of society exist at
the same time in specific historical situations, be it in harmony or in
conflict.[^42^](#c2-note-0042){#c2-note-0042a} []{#Page_82
type="pagebreak" title="82"}These impressions, however, which are so
firmly associated with the German concept of *Gemeinschaft*, make it
rather difficult to comprehend the new forms of communality that have
developed in the offshoots of networked life. This is because, at least
for now, these latter forms do not represent a genuine alternative to
societal types of social
connectedness.[^43^](#c2-note-0043){#c2-note-0043a} The English word
"community" is somewhat more open. The opposition between community and
society resonates with it as well, although the dichotomy is not as
clear-cut. American communitarianism, for instance, considers the
difference between community and society to be gradual and not
categorical. Its primary aim is to strengthen civic institutions and
mechanisms, and it regards community as an intermediary level between
the individual and society.[^44^](#c2-note-0044){#c2-note-0044a} But
there is a related English term, which seems even more productive for my
purposes, namely "community of practice," a concept that is more firmly
grounded in the empirical observation of concrete social relationships.
The term was introduced at the beginning of the 1990s by the social
researchers Jean Lave and Étienne Wenger. They observed that, in most
cases, professional learning (for instance, in their case study of
midwives) does not take place as a one-sided transfer of knowledge or
proficiency, but rather as an open exchange, often outside of the formal
learning environment, between people with different levels of knowledge
and experience. In this sense, learning is an activity that, though
distinguishable, cannot easily be separated from other "normal"
activities of everyday life. As Lave and Wenger stress, however, the
community of practice is not only a social space of exchange; it is
rather, and much more fundamentally, "an intrinsic condition for the
existence of knowledge, not least because it provides the interpretive
support necessary for making sense of its
heritage."[^45^](#c2-note-0045){#c2-note-0045a} Communities of practice
are thus always epistemic communities that form around certain ways of
looking at the world and one\'s own activity in it. What constitutes a
community of practice is thus the joint acquisition, development, and
preservation of a specific field of practice that contains abstract
knowledge, concrete proficiencies, the necessary material and social
resources, guidelines, expectations, and room to interpret one\'s own
activity. All members are active participants in the constitution of
this field, and this reinforces the stress on []{#Page_83
type="pagebreak" title="83"}practice. Each of them, however, brings
along different presuppositions and experiences, for each of them is
embedded within numerous and specific situations of life or work.
The processes within the community are mostly informal, and yet they are
thoroughly structured, for authority is distributed unequally and is
based on the extent to which the members value each other\'s (and their
own) levels of knowledge and experience. At first glance, then, the term
"community of practice" seems apt to describe the meaning-generating
communal formations that are at issue here. It is also somewhat
problematic, however, because, having since been subordinated to
management strategies, its use is now narrowly applied to professional
learning and managing knowledge.[^46^](#c2-note-0046){#c2-note-0046a}

From these various notions of community, it is possible to develop the
following way of looking at new types of communality: they are formed in
a field of practice, characterized by informal yet structured exchange,
focused on the generation of new ways of knowing and acting, and
maintained through the reflexive interpretation of their own activity.
This last point in particular -- the communal creation, preservation,
and alteration of the interpretive framework in which actions,
processes, and objects acquire a firm meaning and connection -- can be
seen as the central role of communal formations.

Communication is especially significant to them. Individuals must
continuously communicate in order to constitute themselves within the
fields and practices, or else they will remain invisible. The mass of
tweets, updates, emails, blogs, shared pictures, texts, posts on
collaborative platforms, and databases (etc.) that are necessary for
this can only be produced and processed by means of digital
technologies. In this act of incessant communication, which is a
constitutive element of social existence, the personal desire for
self-constitution and orientation becomes enmeshed with the outward
pressure of having to be present and available to form a new and binding
set of requirements. This relation between inward motivation and outward
pressure can vary highly, depending on the character of the communal
formation and the position of the individual within it (although it is
not the individual who determines what successful communication is, what
represents a contribution to the communal formation, or in which form
one has to be present). []{#Page_84 type="pagebreak" title="84"}Such
decisions are made by other members of the formation in the form of
positive or negative feedback (or none at all), and they are made with
recourse to the interpretive framework that has been developed in
common. These communal and continuous acts of learning, practicing, and
orientation -- the exchange, that is, between "novices" and "experts" on
the same field, be it concerned with internet politics, illegal street
racing, extreme right-wing music, body modification, or a free
encyclopedia -- serve to maintain the framework of shared meaning,
expand the constituted field, recruit new members, and adapt the
framework of interpretation and activity to changing conditions. Such
communal formations constitute themselves; they preserve and modify
themselves by constantly working out the foundations of their
constitution. This may sound circular, for the process of reflexive
self-constitution -- "autopoiesis" in the language of systems theory --
is circular in the sense that control is maintained through continuous,
self-generating feedback. Self-referentiality is a structural feature of
these formations.
:::

::: {.section}
### Singularity and communality {#c2-sec-0011}

The new communal formations are informal forms of organization that are
based on voluntary action. No one is born into them, and no one
possesses the authority to force anyone else to join or remain against
his or her will, or to assign anyone with tasks that he or she might be
unwilling to do. Such a formation is not an enclosed disciplinary
institution in Foucault\'s sense,[^47^](#c2-note-0047){#c2-note-0047a}
and, within it, power is not exercised through commands, as in the
classical sense formulated by Max
Weber.[^48^](#c2-note-0048){#c2-note-0048a} The condition of not being
locked up and not being subordinated can, at least at first, represent
for the individual a gain in freedom. Under a given set of conditions,
everyone can (and must) choose which formations to participate in, and
he or she, in doing so, will have a better or worse chance to influence
the communal field of reference.

On the everyday level of communicative self-constitution and creating a
personal cognitive horizon -- in innumerable streams, updates, and
timelines on social mass media -- the most important resource is the
attention of others; that is, their feedback and the mutual recognition
that results from it. []{#Page_85 type="pagebreak" title="85"}And this
recognition may simply be in the form of a quickly clicked "like," which
is the smallest unit that can assure the sender that, somewhere out
there, there is a receiver. Without the latter, communication has no
meaning. The situation is somewhat menacing if no one clicks the "like"
button beneath a post or a photo. It is a sign that communication has
broken off, and the result is the dissolution of one\'s own communicatively
constituted social existence. In this context, the boundaries are
blurred between the categories of information, communication, and
activity. Making information available always involves the active --
that is, communicating -- person, and not only in the case of ubiquitous
selfies, for in an overwhelming and chaotic environment, as discussed
above, selection itself is of such central importance that the
differences between the selected and the selecting become fluid,
particularly when the goal of the latter is to experience confirmation
from others. In this back-and-forth between one\'s own presence and the
validation of others, one\'s own motives and those of the community are
not in opposition but rather mutually depend on one another. Condensed
to simple norms and to a basic set of guidelines within the context of
an image-oriented social mass media service, the rule (or better:
friendly tip) that one need not but probably ought to follow is this:

::: {.extract}
Be an active member of the Instagram community to receive likes and
comments. Take time to comment on a friend\'s photo, or to like photos.
If you do this, others will reciprocate. If you never acknowledge your
followers\' photos, then they won\'t acknowledge
you.[^49^](#c2-note-0049){#c2-note-0049a}
:::

The context of this widespread and highly conventional piece of advice
is not, for instance, a professional marketing campaign; it is simply
about personally positioning oneself within a social network. The goal
is to establish one\'s own, singular, identity. The process required to
do so is not primarily inward-oriented; it is not based on questions
such as: "Who am I really, apart from external influences?" It is rather
outward-oriented. It takes place through making connections with others
and is concerned with questions such as: "Who is in my network, and what
is my position within it?" It is []{#Page_86 type="pagebreak"
title="86"}revealing that none of the tips in the collection cited above
offers advice about achieving success within a community of
photographers; there are no suggestions, for instance, about how to
take high-quality photographs. With smart cameras and built-in filters
for post-production, this is no longer especially challenging,
particularly because individual pictures, to be examined closely and on
their own terms, have become less important gauges of value than streams
of images that are meant to be quickly scrolled through. Moreover, the
function of the critic, who once monopolized the right to interpret and
evaluate an image for everyone, is no longer of much significance.
Instead, the quality of a picture is primarily judged according to
whether "others like it"; that is, according to its performance in the
ongoing popularity contest within a specific niche. But users do not
rely on communal formations and the feedback they provide just for the
sharing and evaluation of pictures. Rather, this dynamic has come to
determine more and more facets of life. Users experience the
constitution of singularity and communality, in which a person can be
perceived as such, as simultaneous and reciprocal processes. A million
times over and nearly subconsciously (because it is so commonplace),
they engage in a relationship between the individual and others that no
longer really corresponds to the liberal opposition between
individuality and society, between personal and group identity. Instead
of viewing themselves as exclusive entities (either in terms of the
emphatic affirmation of individuality or its dissolution within a
homogeneous group), the new formations require that the production of
difference and commonality takes place
simultaneously.[^50^](#c2-note-0050){#c2-note-0050a}
:::

::: {.section}
### Authenticity and subjectivity {#c2-sec-0012}

Because members have decided to participate voluntarily in the
community, their expressions and actions are regarded as authentic, for
it is implicitly assumed that, in making these gestures, they are not
following anyone else\'s instructions but rather their own motivations.
The individual does not act as a representative or functionary of an
organization but rather as a private and singular (that is, unique)
person. At a gathering of the Occupy movement, a sure way to be kicked
out is to stick stubbornly to a party line, even if this way
[]{#Page_87 type="pagebreak" title="87"}of thinking happens to agree
with that of the movement. Not only at Occupy gatherings, however, but
in all new communal formations it is expected that everyone there is
representing his or her own interests. As most people are aware, this
assumption is theoretically naïve and often proves to be false in
practice. Even spontaneity can be calculated, and in many cases it is.
Nevertheless, the expectation of authenticity is relevant because it
creates a minimum of trust. As the basis of social trust, such
contra-factual expectations exist elsewhere as well. Critical readers of
newspapers, for instance, must assume that what they are reading has
been well researched and is presented as objectively as possible, even
though they know that objectivity is theoretically a highly problematic
concept -- to this extent, postmodern theory has become common knowledge
-- and that newspapers often pursue (hidden) interests or lead
campaigns. Yet without such contra-factual assumptions, the respective
orders of knowledge and communication would not function, for they
provide the normative framework within which deviations can be
perceived, criticized, and sanctioned.

In a seemingly traditional manner, the "authentic self" is formulated
with reference to one\'s inner world, for instance to personal
knowledge, interests, or desires. As the core of personality, however,
this inner world no longer represents an immutable and essential
characteristic but rather a temporary position. Today, even someone\'s
radical reinvention can be regarded as authentic. This is the central
difference from the classical, bourgeois conception of the subject. The
self is no longer understood in essentialist terms but rather
performatively. Accordingly, the main demand on the individual who
voluntarily opts to participate in a communal formation is no longer to
be self-aware but rather to be
self-motivated.[^51^](#c2-note-0051){#c2-note-0051a} Nor is it necessary
any more for one\'s core self to be coherent. It is not a contradiction
to appear in various communal formations, each different from the next,
as a different "I myself," for every formation is comprehensive, in that
it appeals to the whole person, and simultaneously partial, in that it
is oriented toward a particular goal and not toward all areas of life.
As in the case of re-mixes and other referential processes, the concern
here is not to preserve authenticity but rather to create it in the
moment. The success or failure []{#Page_88 type="pagebreak"
title="88"}of these efforts is determined by the continuous feedback of
others -- one like after another.

These practices have led to a modified form of subject constitution for
which some sociologists, engaged in empirical research, have introduced
the term "networked individualism."[^52^](#c2-note-0052){#c2-note-0052a}
The idea is based on the observation that people in Western societies
(the case studies were mostly in North America) are defining their
identity less and less by their family, profession, or other stable
collective, but rather increasingly in terms of their personal social
networks; that is, according to the communal formations in which they
are active as individuals and in which they are perceived as singular
people. In this regard, individualization and atomization no longer
necessarily go hand in hand. On the contrary, the intertwined nature of
personal identity and communality can be experienced on an everyday
level, given that both are continuously created, adapted, and affirmed
by means of personal communication. This makes the networks in question
simultaneously fragile and stable. Fragile because they require the
ongoing presence of every individual and because communication can break
down quickly. Stable because the networks of relationships that can
support a single person -- as regards the number of those included,
their geographical distribution, and the duration of their cohesion --
have expanded enormously by means of digital communication technologies.

Here the issue is not that of close friendships, whose number remains
relatively constant for most people and over long periods of
time,[^53^](#c2-note-0053){#c2-note-0053a} but rather so-called "weak
ties"; that is, more or less loose acquaintances that can be tapped for
new information and resources that do not exist within one\'s close
circle of friends.[^54^](#c2-note-0054){#c2-note-0054a} The more they
are expanded, the more sustainable and valuable these networks become,
for they bring together a large number of people and thus multiply the
material and organizational resources that are (potentially) accessible
to the individual. It is impossible to make a sweeping statement as to
whether these formations actually represent communities in a
comprehensive sense and how stable they really are, especially in times
of crisis, for this is something that can only be found out on a
case-by-case basis. It is relevant that the development of personal
networks []{#Page_89 type="pagebreak" title="89"}has not taken place in
a vacuum. The disintegration of institutions that were formerly
influential in the formation of identity and meaning began long before
the large-scale spread of networks. For most people, there is no other
choice but to attempt to orient and organize oneself, regardless of how
provisional or uncertain this may be. Or, as Manuel Castells somewhat
melodramatically put it, "At the turn of the millennium, the king and
the queen, the state and civil society, are both naked, and their
children-citizens are wandering around a variety of foster
homes."[^55^](#c2-note-0055){#c2-note-0055a}
:::

::: {.section}
### Space and time as a communal practice {#c2-sec-0013}

Although participation in a communal formation is voluntary, it is not
unselfish. Quite the contrary: an important motivation is to gain access
to a formation\'s constitutive field of practice and to the resources
associated with it. A communal formation ultimately does more than
simply steer the attention of its members toward one another. Through
the common production of culture, it also structures how the members
perceive the world and how they are able to design themselves and their
potential actions in it. It is thus a cooperative mechanism of
filtering, interpretation, and constitution. Through the everyday
referential work of its members, the community selects a manageable
amount of information from the excess of potentially available
information and brings it into a meaningful context, whereby it
validates the selection itself and orients the activity of each of its
members.

The new communal formations consist of self-referential worlds whose
constructive common practice affects the foundations of social activity
itself -- the constitution of space and time. How? The spatio-temporal
horizon of digital communication is a global (that is, placeless) and
ongoing present. The technical vision of digital communication is always
the here and now. With the instant transmission of information,
everything that is not "here" is inaccessible and everything that is not
"now" has disappeared. Powerful infrastructure has been built to achieve
these effects: data centers, intercontinental networks of cables,
satellites, high-performance nodes, and much more. Through globalized
high-frequency trading, actors in the financial markets have realized
this []{#Page_90 type="pagebreak" title="90"}technical vision to its
broadest extent by creating a never-ending global present whose expanse
is confined to milliseconds. This process is far from coming to an end,
for massive amounts of investment are allocated to accomplish even the
smallest steps toward this goal. On November 3, 2015, a 4,600-kilometer,
300-million-dollar transatlantic telecommunications cable (Hibernia
Express) was put into operation between London and New York -- the first
in more than 10 years -- with the single goal of accelerating automated
trading between the two places by 5.2 milliseconds.
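
A rough back-of-the-envelope sketch makes the order of magnitude tangible
(the fiber\'s refractive index and the comparison route length below are
assumptions, the latter chosen so that the difference matches the quoted
5.2 milliseconds). Light in optical fiber travels at roughly two-thirds of
its vacuum speed, so every kilometer shaved off a route saves about 10
microseconds of round-trip time.

```python
C_VACUUM_KM_S = 299_792                # speed of light in vacuum, km/s
FIBER_INDEX = 1.47                     # typical refractive index of fiber (assumed)
V_FIBER = C_VACUUM_KM_S / FIBER_INDEX  # roughly 204,000 km/s in glass

def round_trip_ms(route_km):
    """Round-trip propagation delay in milliseconds for a cable of this length."""
    return 2 * route_km / V_FIBER * 1000

new_route = round_trip_ms(4_600)       # Hibernia Express: about 45 ms
old_route = round_trip_ms(5_130)       # hypothetical longer legacy route
print(round(old_route - new_route, 1)) # about 5.2 ms gained
```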

For social and biological processes, this technical horizon of space and
time is neither achievable nor desirable. Such processes, on the
contrary, are existentially dependent on other spatial and temporal
orders. Yet because of the existence of this non-geographical and
atemporal horizon, the need -- as well as the possibility -- has arisen
to redefine the parameters of space and time themselves in order to
counteract the mire of technically defined spacelessness and
timelessness. If space and time are not simply to vanish in this
spaceless, ongoing present, how then should they be defined? Communal
formations create spaces for action not least by determining their own
geographies and temporal rhythms. They negotiate what is near and far
and also which places are disregarded (that is, not even perceived). If
every place is communicatively (and physically) reachable, every person
must decide which place he or she would like to reach in practice. This,
however, is not an individual decision but rather a task that can only
be approached collectively. Those places which are important and thus
near are determined by communal formations. This takes place in the form
of a rough consensus through the blogs that "one" has to read, the
exhibits that "one" has to see, the events and conferences that "one"
has to attend, the places that "one" has to visit before they are
overrun by tourists, the crises in which "the West" has to intervene,
the targets that "lend themselves" to a terrorist attack, and so on. On
its own, however, selection is not enough. Communal formations are
especially powerful when they generate the material and organizational
resources that are necessary for their members to implement their shared
worldview through actions -- to visit, for instance, the places that
have been chosen as important. This can happen if they enable access
[]{#Page_91 type="pagebreak" title="91"}to stipends, donations, price
reductions, ride shares, places to stay, tips, links, insider knowledge,
public funds, airlifts, explosives, and so on. It is in this way that
each formation creates its respective spatial constructs, which define
distances in a great variety of ways. At the same time that war-torn
Syria is unreachably distant even for seasoned reporters and their
staff, veritable travel agencies are being set up in order to bring
Western jihadists there in large numbers.

Things are similar for the temporal dimensions of social and biological
processes. Permanent presence is a temporality that is inimical to life
but, under its influence, temporal rhythms have to be redefined as well.
What counts as fast? What counts as slow? In what order should things
proceed? On the everyday level, for instance, the matter can be as
simple as how quickly to respond to an email. Because the transmission
of information hardly takes any time, every delay is a purely social
creation. But how much is acceptable? There can be no uniform answer to
this. The members of each communal formation have to negotiate their own
rules with one another, even in areas of life that are otherwise highly
formalized. In an interview with the magazine *Zeit*, for instance, a
lawyer with expertise in labor law was asked whether a boss may require
employees to be reachable at all times. Instead of answering by
referring to any binding legal standards, the lawyer casually advised
that this was a matter of flexible negotiation: "Express your misgivings
openly and honestly about having to be reachable after hours and,
together with your boss, come up with an agreeable rule to
follow."[^56^](#c2-note-0056){#c2-note-0056a} If only it were that easy.

Temporalities that, in many areas, were once simply taken for granted by
everyone on account of the factuality of things now have to be
culturally determined -- that is, explicitly negotiated -- in a greater
number of contexts. Under the conditions of capitalism, which is always
creating new competitions and incentives, one consequence is the
often-lamented "acceleration of time." We are asked to produce, consume,
or accomplish more and more in less and less
time.[^57^](#c2-note-0057){#c2-note-0057a} This change in the
structuring of time is not limited to linear acceleration. It reaches
deep into the foundations of life and has even reconfigured biological
processes themselves. Today there is an entire industry that specializes
in freezing the stem []{#Page_92 type="pagebreak" title="92"}cells of
newborns in liquid nitrogen -- that is, in suspending cellular
biological time -- in case they might be needed later on in life for a
transplant or for the creation of artificial organs. Children can be
born even if their physical mothers are already dead. Or they can be
"produced" from ova that have been stored for many years at minus 196
degrees Celsius.[^58^](#c2-note-0058){#c2-note-0058a} At the same time,
questions now have to be addressed every day whose grand temporal
dimensions were once the matter of myth. In the case of atomic energy,
for instance, there is the issue of permanent disposal. Where can we
deposit nuclear waste for the next hundred thousand years without it
causing catastrophic damage? How can the radioactive material even be
transported there, wherever that is, within the framework of everyday
traffic laws?[^59^](#c2-note-0059){#c2-note-0059a}

The construction of temporal dimensions and sequences has thus become an
everyday cultural question. Whereas throughout Europe, for example,
committees of experts and ethicists still meet to discuss reproductive
medicine and offer their various recommendations, many couples are
concerned with the specific question of whether or how they can fulfill
their wish to have children. Without a coherent set of rules, questions
such as these have to be answered by each individual with recourse to
his or her personally relevant communal formation. If there is no
cultural framework that at least claims to be binding for everyone, then
the individual must negotiate independently within each communal
formation with the goal of acquiring the resources necessary to act
according to communal values and objectives.
:::

::: {.section}
### Self-generating orders {#c2-sec-0014}

These three functions -- selection, interpretation, and the constitutive
ability to act -- make communal formations the true subject of the
digital condition. In principle, these functions are nothing new;
rather, they are typical of fields that are organized without reference
to external or irrefutable authorities. The state of scholarship, for
instance, is determined by what is circulated in refereed publications.
In this case, "refereed" means that scientists at the same professional
rank mutually evaluate each other\'s work. The scientific community (or
better: the sub-community of a specialized discourse) []{#Page_93
type="pagebreak" title="93"}evaluates the contributions of individual
scholars. They decide what should be considered valuable, and this
consensus can theoretically be revised at any time. It is based on a
particular catalog of criteria, on an interpretive framework that
provides lines of inquiry, methods, appraisals, and conventions of
presentation. With every article, this framework is confirmed and
reconstituted. If the framework changes, this can lead in the most
extreme case to a paradigm shift, which overturns fundamental
orientations, assumptions, and
certainties.[^60^](#c2-note-0060){#c2-note-0060a} The result of this is
not only a change in how scientific contributions are evaluated but also
a change in how the external world is perceived and what activities are
possible in it. Precisely because the sciences claim to define
themselves, they have the ability to revise their own foundations.

The sciences were the first large sphere of society to achieve
comprehensive cultural autonomy; that is, the ability to determine its
own binding meaning. Art was the second that began to organize itself on
the basis of internal feedback. It was during the era of Romanticism
that artists first laid claim to autonomy. They demanded "to absolve art
from all conditions, to represent it as a realm -- indeed as the only
realm -- in which truth and beauty are expressed in their pure form, a
realm in which everything truly human is
transcended."[^61^](#c2-note-0061){#c2-note-0061a} With the spread of
photography in the second half of the nineteenth century, art also
liberated itself from its final task, which was foisted upon it from the
outside, namely the need to represent external reality. Instead of
having to represent the external world, artists could now focus on their
own subjectivity. This gave rise to a radical individualism, which found
its clearest summation in Marcel Duchamp\'s assertion that only the
artist could determine what is art. This he claimed in 1917 by way of
explaining how an industrially produced urinal, exhibited as a signed
piece with the title "Fountain," could be considered a work of art.

With the rise of the knowledge economy and the expansion of cultural
fields, including the field of art and the artists active within it,
this individualism quickly swelled to unmanageable levels. As a
consequence, the task of defining what should be regarded as art shifted
from the individual artist to the curator. It now fell upon the latter
to select a few works from the surplus of competing scenes and thus
bring temporary []{#Page_94 type="pagebreak" title="94"}order to the
constantly diversifying and changing world of contemporary art. This
order was then given expression in the form of exhibits, which were
intended to be more than the sum of their parts. The beginning of this
practice can be traced to the 1969 exhibition *When Attitudes Become Form*,
which was curated by Harald Szeemann for the Kunsthalle Bern (it
was also sponsored by Philip Morris). The works were not neatly
separated from one another and presented without reference to their
environment, but were connected with each other both spatially and in
terms of their content. The effect of the exhibition could be felt at
least as much through the collection of works as a whole as it could
through the individual pieces, many of which had been specially
commissioned for the exhibition itself. It not only cemented Szeemann\'s
reputation as one of the most significant curators of the twentieth
century; it also completely redefined the function of the curator as a
central figure within the art system.

This was more than 40 years ago and in a system that functioned
differently from that of today. The distance from this exhibition, but
also its ongoing relevance, was negotiated, significantly, in a
re-enactment at the 2013 Biennale in Venice. For this, the old rooms at
the Kunsthalle Bern were reconstructed in the space of the Fondazione
Prada in such a way that both could be seen simultaneously. As is
typical with such re-enactments, the curators of the project described
its goals in terms of appropriation and distancing: "This was the
challenge: how could we find and communicate a limit to a non-limit,
creating a place that would reflect exactly the architectural structures
of the Kunsthalle, but also an asymmetrical space with respect to our
time and imbued with an energy and tension equivalent to that felt at
Bern?"[^62^](#c2-note-0062){#c2-note-0062a}

Curation -- that is, selecting works and associating them with one
another -- has become an omnipresent practice in the art system. No
exhibition takes place any more without a curator. Nevertheless,
curators have lost their extraordinary
position,[^63^](#c2-note-0063){#c2-note-0063a} with artists taking on
more of this work themselves, not only because the boundaries between
artistic and curatorial activities have become fluid but also because
many artists explicitly co-produce the context of their work by
incorporating a multitude of references into their pieces. It is with
precisely this in mind that André Rottmann, in the []{#Page_95
type="pagebreak" title="95"}quotation cited at the beginning of this
chapter, can assert that referentiality has become the dominant
production-aesthetic model in contemporary art. This practice enables
artists to objectify themselves by explicitly placing themselves into a
historical and social context. At the same time, it also enables them to
subjectify the historical and social context by taking the liberty to
select and arrange the references
themselves.[^64^](#c2-note-0064){#c2-note-0064a}

Such strategies are no longer specific to art. Self-generated spaces of
reference and agency are now deeply embedded in everyday life. The
reason for this is that a growing number of questions can no longer be
answered in a generally binding way (such as those about what
constitutes fine art), while the enormous expansion of the cultural
requires explicit decisions to be made in more aspects of life. The
reaction to this dilemma has been radical subjectivation. This has not,
however, been taking place at the level of the individual but rather at
that of communal formations. There is now a patchwork of answers to
large questions and a multitude of reactions to large challenges, all of
which are limited in terms of their reliability and scope.
:::

::: {.section}
### Ambivalent voluntariness {#c2-sec-0015}

Even though participation in new formations is voluntary and serves the
interests of their members, it is not without preconditions. The most
important of these is acceptance, the willing adoption of the
interpretive framework that is generated by the communal formation. The
latter is formed from the social, cultural, legal, and technical
protocols that lend to each of these formations its concrete
constitution and specific character. Protocols are common sets of rules;
they establish, according to the network theorist Alexander Galloway,
"the essential points necessary to enact an agreed-upon standard of
action." They provide, he goes on, "etiquette for autonomous
agents."[^65^](#c2-note-0065){#c2-note-0065a} Protocols are
simultaneously voluntary and binding; they allow actors to meet
eye-to-eye instead of entering into hierarchical relations with one
another. If everyone voluntarily complies with the protocols, then it is
not necessary for one actor to give instructions to another. Whoever
accepts the relevant protocols can interact with others who do the same;
whoever opts not to []{#Page_96 type="pagebreak" title="96"}accept them
will remain on the outside. Protocols establish, for example, common
languages, technical standards, or social conventions. The fundamental
protocol for the internet is the Transmission Control Protocol/Internet
Protocol (TCP/IP). This suite of protocols defines the common language
for exchanging data. Every device that exchanges information over the
internet -- be it a smartphone, a supercomputer in a data center, or a
networked thermostat -- has to use these protocols. In a growing number
of social contexts, the common language is English. Whoever wishes to
belong has to speak it increasingly often. In the natural sciences,
communication now takes place almost exclusively in English. Non-native
speakers who accept this norm may pay a high price: they have to learn a
new language and continually improve their command of it or else resign
themselves to being unable to articulate things as they would like --
not to mention losing the possibility of expressing something for which
another language would perhaps be more suitable, or forfeiting
traditions that cannot be expressed in English. But those who refuse to
go along with these norms pay an even higher price, risking
self-marginalization. Those who "voluntarily" accept conventions gain
access to a field of practice, even though within this field they may be
structurally disadvantaged. But unwillingness to accept such
conventions, with subsequent denial of access to this field, might have
even greater disadvantages.[^66^](#c2-note-0066){#c2-note-0066a}
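
What "accepting" such a protocol means in practice can be made concrete
with a minimal sketch (assuming Python\'s standard socket library; the
host below is a placeholder). Whatever the device, an exchange only
happens if both sides go through the same TCP/IP steps of connecting,
sending, and receiving.

```python
import socket

HOST, PORT = "example.com", 80   # placeholder endpoint (any public web server)

# Opening the connection already presupposes the whole TCP/IP suite:
# name resolution, the TCP handshake, IP routing underneath.
with socket.create_connection((HOST, PORT), timeout=5) as conn:
    # The application-layer request (here: HTTP) rides on top of that
    # shared protocol stack; refuse the protocols and no exchange occurs.
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(conn.recv(1024).decode("ascii", errors="replace").splitlines()[0])
```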

In everyday life, the factors involved with this trade-off are often
presented in the form of subtle cultural codes. For instance, in order
to participate in a project devoted to the development of free software,
it is not enough for someone to possess the necessary technical
knowledge; he or she must also be able to fit into a wide-ranging
informal culture with a characteristic style of expression, humor, and
preferences. Ultimately, software developers do not form a professional
corps in the traditional sense -- in which functionaries meet one
another in the narrow and regulated domain of their profession -- but
rather a communal formation in which the engagement of the whole person,
both one\'s professional and social self, is scrutinized. The
abolishment of the separation between different spheres of life,
requiring interaction of a more holistic nature, is in fact a key
attraction of []{#Page_97 type="pagebreak" title="97"}these communal
formations and is experienced by some as a genuine gain in freedom. In
this situation, one is no longer subjected to rules imposed from above
but rather one is allowed to -- and indeed ought to -- be authentically
pursuing his or her own interests.

But for others the experience can be quite the opposite because the
informality of the communal formation also allows forms of exclusion and
discrimination that are no longer acceptable in formally organized
realms of society. Discrimination is more difficult to identify when it
takes place within the framework of voluntary togetherness, for no one
is forced to participate. If you feel uncomfortable or unwelcome, you
are free to leave at any time. But this is a specious argument. The
areas of free software or Wikipedia are difficult places for women. In
these clubby atmospheres of informality, they are often faced with
blatant sexism, and this is one of the reasons why many women choose to
stay away from such projects.[^67^](#c2-note-0067){#c2-note-0067a} In
2007, according to estimates by the American National Center for Women &
Information Technology, approximately 27 percent of all jobs related to
computer science were held by women, whereas their representation in the
field of free software was far lower -- on average
less than 2 percent. And for years, the proportion of women who edit
texts on Wikipedia has hovered at around 10
percent.[^68^](#c2-note-0068){#c2-note-0068a}

The consequences of such widespread, informal, and elusive
discrimination are not limited to the fact that certain values and
prejudices of the shared culture are included in these products, while
different viewpoints and areas of knowledge are
excluded.[^69^](#c2-note-0069){#c2-note-0069a} What is more, those who
are excluded or do not wish to expose themselves to discrimination (and
thus do not even bother to participate in any communal formations) do
not receive access to the resources that circulate there (attention and
support, valuable and timely knowledge, or job offers). Many people are
thus faced with the choice of either enduring the discrimination within
a community or remaining on the outside and thus invisible. That this
decision is made on a voluntary basis and on one\'s own responsibility
hardly mitigates the coercive nature of the situation. There may be a
choice, but it would be misleading to call it a free one.[]{#Page_98
type="pagebreak" title="98"}
:::

::: {.section}
### The power of sociability {#c2-sec-0016}

In order to explain the peculiar coercive nature of the (nominally)
voluntary acceptance of protocols, rules, and norms, the political
scientist David Singh Grewal, drawing on the work of Max Weber and
Michel Foucault, has distinguished between the "power of sovereignty"
and the "power of sociability."[^70^](#c2-note-0070){#c2-note-0070a}
The former develops on the basis of dominance and subordination, as
imposed by authorities, police officers, judges, or other figures within
formal hierarchies. Their power is anchored in disciplinary
institutions, and the dictum of this sort of power is: "You must!" The
power of sociability, on the contrary, functions by prescribing the
conditions or protocols under which people are able to enter into an
exchange with one another. The dictum of this sort of power is: "You
can!" The more people accept certain protocols and standards, the more
powerful these become. Accordingly, the sociability that they structure
also becomes more comprehensive, and those not yet involved have to ask
themselves all the more urgently whether they can afford not to accept
these protocols and standards. Whereas the first type of power is
ultimately based on the monopoly of violence and on repression, the
second is founded on voluntary submission. When the entire internet
speaks TCP/IP, then an individual\'s decision to use it may be voluntary
in nominal terms, but at the same time it is an indispensable
precondition for existing within the network at all. Protocols exert
power without there having to be anyone present to possess the power in
question. Whereas the sovereign can be located, the effects of
sociability\'s power are diffuse and omnipresent. They are not
repressive but rather constitutive. No one forces a scientist to publish
in English or a woman editor to tolerate disparaging remarks on
Wikipedia. People accept these often implicit behavioral norms (sexist
comments are permitted, for instance) out of their own interests in
order to acquire access to the resources circulating within the networks
and to constitute themselves within them. In this regard, Grewal
distinguishes between the "intrinsic" and "extrinsic" reasons for
abiding by certain protocols.[^71^](#c2-note-0071){#c2-note-0071a} In
the first case, the motivation is based on a new protocol being better
suited than existing protocols for carrying out []{#Page_99
type="pagebreak" title="99"}a specific objective. People thus submit
themselves to certain rules because they are especially efficient,
transparent, or easy to use. In the second case, a protocol is accepted
not because but in spite of its features. It is simply a precondition
for gaining access to a space of agency in which resources and
opportunities are available that cannot be found anywhere else. In the
first case, it is possible to speak subjectively of voluntariness,
whereas the second involves some experience of impersonal compulsion.
One is forced to do something that might potentially entail grave
disadvantages in order to have access, at least, to another level of
opportunities or to create other advantages for oneself.
:::

::: {.section}
### Homogeneity, difference and authority {#c2-sec-0017}

Protocols are present on more than a technical level; as interpretive
frameworks, they structure viewpoints, rules, and patterns of behavior
on all levels. Thus, they provide a degree of cultural homogeneity, a
set of commonalities that lend these new formations their communal
nature. Viewed from the outside, these formations therefore seem
inclined toward consensus and uniformity, for their members have already
accepted and internalized certain aspects in common -- the protocols
that enable exchange itself -- whereas everyone on the outside has not
done so. When everyone is speaking in English, the conversation sounds
quite monotonous to someone who does not speak the language.

Viewed from the inside, the experience is something different: in order
to constitute oneself within a communal formation, not only does one
have to accept its rules voluntarily and in a self-motivated manner; one
also has to make contributions to the reproduction and development of
the field. Everyone is urged to contribute something; that is, to
produce, on the basis of commonalities, differences that simultaneously
affirm, modify, and enhance these commonalities. This leads to a
pronounced and occasionally highly competitive internal differentiation
that can only be understood, however, by someone who has accepted the
commonalities. To an outsider, this differentiation will seem
irrelevant. Whoever is not well versed in the universe of *Star Wars*
will not understand why the various character interpretations at
[]{#Page_100 type="pagebreak" title="100"}cosplay conventions, which I
discussed above, might be brilliant or even controversial. To such a
person, they will all seem equally boring and superficial.

These formations structure themselves internally through the production
of differences; that is, by constantly changing their common ground.
Those who are able to add many novel aspects to the common resources
gain a degree of authority. They assume central positions and they
influence, through their behavior, the development of the field more
than others do. However, their authority, influence, and de facto power
are not based on any means of coercion. As Niklas Luhmann noted, "In the
end, one participant\'s achievements in making selections \[...\] are
accepted by another participant \[...\] as a limitation of the latter\'s
potential experiences and activities without him having to make the
selection on his own."[^72^](#c2-note-0072){#c2-note-0072a} Even this is
a voluntary and self-interested act: the members of the formation
recognize that this person has contributed more to the common field and
to the resources within it. This, in turn, is to everyone\'s advantage,
for each member would ultimately like to make use of the field\'s
resources to achieve his or her own goals. This arrangement, which can
certainly take on hierarchical qualities, is experienced as something
meritocratically legitimized and voluntarily
accepted.[^73^](#c2-note-0073){#c2-note-0073a} In the context of free
software, there has therefore been some discussion of "benevolent
dictators."[^74^](#c2-note-0074){#c2-note-0074a} The matter of
"dictators" is raised because projects are often led by charismatic
figures without a formal mandate. They are "benevolent" because their
position of authority is based on the fact that a critical mass of
participating producers has voluntarily subordinated itself for its own
self-interest. If the consensus breaks down over whose contributions have
been carrying the most weight, then the formation will be at risk of
losing its internal structure and splitting apart ("forking," in the
jargon of free software).
:::
:::

::: {.section}
Algorithmicity {#c2-sec-0018}
--------------

Through personal communication, referential processes in communal
formations create cultural zones of various sizes and scopes. They
expand into the empty spaces that have been created by the erosion of
established institutions and []{#Page_101 type="pagebreak"
title="101"}processes, and once these new processes have been
established the process of erosion intensifies. Multiple processes of
exchange take place alongside one another, creating a patchwork of
interconnected, competing, or entirely unrelated spheres of meaning,
each with specific goals and resources and its own preconditions and
potentials. The structures of knowledge, order, and activity that are
generated by this are holistic as well as partial and limited. The
participants in such structures are simultaneously addressed on many
levels that were once functionally separated; previously independent
spheres, such as work and leisure, are now mixed together, but usually
only with respect to the subdivisions of one\'s own life. And, at first,
the structures established in this way are binding only for active
participants.

::: {.section}
### Exiting the "Library of Babel" {#c2-sec-0019}

For one person alone, however, these new processes would not be able to
generate more than a local island of meaning from the enormous clamor of
chaotic spheres of information. In his 1941 story "The Library of
Babel," Jorge Luis Borges fashioned a fitting image for such a
situation. He depicts the world as a library of unfathomable and
possibly infinite magnitude. The characters in the story do not know
whether there is a world outside of the library. There are reasons to
believe that there is, and reasons that suggest otherwise. The library
houses the complete collection of all possible books that can be written
on exactly 410 pages. Contained in these volumes is the promise that
there is "no personal or universal problem whose eloquent solution
\[does\] not exist," for every possible combination of letters, and thus
also every possible pronouncement, is recorded in one book or another.
No catalog has yet been found for the library (though it must exist
somewhere), and it is impossible to identify any order in its
arrangement of books. The "men of the library," according to Borges,
wander round in search of the one book that explains everything, but
their actual discoveries are far more modest. Only once in a while are
books found that contain more than haphazard combinations of signs. Even
small regularities within excerpts of texts are heralded as sensational
discoveries, and it is around these discoveries that competing
[]{#Page_102 type="pagebreak" title="102"}schools of interpretation
develop. Despite much labor and effort, however, the knowledge gained is
minimal and fragmentary, so the prevailing attitude in the library is
bleak. By the time of the narrator\'s generation, "nobody expects to
discover anything."[^75^](#c2-note-0075){#c2-note-0075a}

Although this vision has now been achieved from a quantitative
perspective -- no one can survey the "library" of digital information,
which in practical terms is infinitely large, and all of the growth
curves continue to climb steeply -- today\'s cultural reality is
nevertheless entirely different from that described by Borges. Our
ability to deal with massive amounts of data has radically improved, and
thus our faith in the utility of information is not only unbroken but
rather gaining strength. What is new is precisely such large quantities
of data ("big data"), which, as we are promised or forewarned, will lead
to new knowledge, to a comprehensive understanding of the world, indeed
even to "omniscience."[^76^](#c2-note-0076){#c2-note-0076a} This faith
in data is based above all on the fact that the two processes described
above -- referentiality and communality -- are not the only new
mechanisms for filtering, sorting, aggregating, and evaluating things.
Beneath or ahead of the social mechanisms of decentralized and networked
cultural production, there are algorithmic processes that pre-sort the
immeasurably large volumes of data and convert them into a format that
can be apprehended by individuals, evaluated by communities, and
invested with meaning.

Strictly speaking, it is impossible to maintain a categorical
distinction between social processes that take place in and by means of
technological infrastructures and technical pro­cesses that are socially
constructed. In both cases, social actors attempt to realize their own
interests with the resources at their disposal. The methods of
(attempted) realization, the available resources, and the formulation of
interests mutually influence one another. The technological resources
are inscribed in the formulation of goals. These open up fields of
imagination and desire, which in turn inspire technical
development.[^77^](#c2-note-0077){#c2-note-0077a} Although it is
impossible to draw clear theoretical lines, the attempt to make such a
distinction can nevertheless be productive in practice, for in this way
it is possible to gain different perspectives about the same object of
investigation.[]{#Page_103 type="pagebreak" title="103"}
:::

::: {.section}
### The rise of algorithms {#c2-sec-0020}

An algorithm is a set of instructions for converting a given input into
a desired output by means of a finite number of steps: algorithms are
used to solve predefined problems. For a set of instructions to become
an algorithm, it has to be determined in three different respects.
First, the necessary steps -- individually and as a whole -- have to be
described unambiguously and completely. To do this, it is usually
necessary to use a formal language, such as mathematics, or a
programming language, in order to avoid the characteristic imprecision
and ambiguity of natural language and to ensure instructions can be
followed without interpretation. Second, it must be possible in practice
to execute the individual steps together. For this reason, every
algorithm is tied to the context of its realization. If the context
changes, so do the operating processes that can be formalized as
algorithms and thus also the ways in which algorithms can partake in the
constitution of the world. Third, it must be possible to execute an
operating instruction mechanically so that, under fixed conditions, it
always produces the same result.
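
To make these three requirements concrete, here is a minimal sketch using a textbook example that is not discussed in the text, Euclid's procedure for the greatest common divisor; every step is unambiguous, executable, and mechanical:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, unambiguous, mechanical procedure.

    Input: two positive integers. Output: their greatest common divisor.
    Each step is fully specified and deterministic, so the same input
    always yields the same output.
    """
    while b != 0:          # repeat a fixed, unambiguous rule ...
        a, b = b, a % b    # ... until the terminating condition is reached
    return a

assert gcd(48, 18) == 6    # identical input, identical result, every time
```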

Defined in such general terms, it would also be possible to understand
the instruction manual for a typical piece of Ikea furniture as an
algorithm. It is a set of instructions for creating, with a finite
number of steps, a specific and predefined piece of furniture (output)
from a box full of individual components (input). The instructions are
composed in a formal language, pictograms, which define each step as
unambiguously as possible, and they can be executed by a single person
with simple tools. The process can be repeated, for the final result is
always the same: a Billy box will always yield a Billy shelf. In this
case, a person takes over the role of a machine, which (unambiguous
pictograms aside) can lead to problems, be it that scratches and other
traces on the finished piece of furniture testify to the unique nature
of the (unsuccessful) execution, or that, inspired by the micro-trend of
"Ikea hacking," the official instructions are intentionally ignored.

Because such imprecision is supposed to be avoided, the most important
domain of algorithms in practice is mathematics and its implementation
on the computer. The term []{#Page_104 type="pagebreak"
title="104"}"algorithm" derives from the Persian mathematician,
astronomer, and geographer Muḥammad ibn Mūsā al-Khwārizmī. His book *On
the Calculation with Hindu Numerals*, which was written in Baghdad in
825, was known widely in the Western Middle Ages through a Latin
translation and made the essential contribution of introducing
Indo-Arabic numerals and the number zero to Europe. The work begins
with the formula *dixit algorizmi* ... ("Algorismi said ..."). During
the Middle Ages, *algorizmi* or *algorithmi* soon became a general term
for advanced methods of
calculation.[^78^](#c2-note-0078){#c2-note-0078a}

The modern effort to build machines that could mechanically carry out
instructions achieved its first breakthrough with Gottfried Wilhelm
Leibniz. He has often been credited with making the following remark:
"It is unworthy of excellent men to lose hours like slaves in the labour
of calculation which could be done by any peasant with the aid of a
machine."[^79^](#c2-note-0079){#c2-note-0079a} This vision already
contains a distinction between higher cognitive and interpretive
activities, which are regarded as being truly human, and lower processes
that involve pure execution and can therefore be mechanized. To this
end, Leibniz himself developed the first calculating machine, which
could carry out all four of the basic types of arithmetic. He was not
motivated to do this by the practical necessities of production and
business (although conceptually groundbreaking, Leibniz\'s calculating
machine remained, on account of its mechanical complexity, a unique item
and was never used).[^80^](#c2-note-0080){#c2-note-0080a} In the
estimation of the philosopher Sybille Krämer, calculating machines "were
rather speculative masterpieces of a century that, like none before it,
was infatuated by the idea of mechanizing 'intellectual'
processes."[^81^](#c2-note-0081){#c2-note-0081a} Long before machines
were implemented on a large scale to increase the efficiency of material
production, Leibniz had already speculated about using them to enhance
intellectual labor. And this vision has never since disappeared. Around
a century and a half later, the English polymath Charles Babbage
formulated it anew, now in direct connection with industrial
mechanization and its imperative of time-saving
efficiency.[^82^](#c2-note-0082){#c2-note-0082a} Yet he, too, failed to
overcome the problem of practically realizing such a machine.

The decisive step that turned the vision of calculating machines into
reality was made by Alan Turing in 1937. With []{#Page_105
type="pagebreak" title="105"}a theoretical model, he demonstrated that
every algorithm could be executed by a machine as long as it could read
an incremental set of signs, manipulate them according to established
rules, and then write them out again. The validity of his model did not
depend on whether the machine would be analog or digital, mechanical or
electronic, for the rules of manipulation were not at first conceived as
being a fixed component of the machine itself (that is, as being
implemented in its hardware). The electronic and digital approach came
to be preferred because it was hoped that even the instructions could be
read by the machine itself, so that the machine would be able to execute
not only one but (theoretically) every written algorithm. The
Hungarian-born mathematician John von Neumann made it his goal to
implement this idea. In 1945, he published a model in which the program
(the algorithm) and the data (the input and output) were housed in a
common storage device. Thus, both could be manipulated simultaneously
without having to change the hardware. In this way, he converted the
"Turing machine" into the "universal Turing machine"; that is, the
modern computer.[^83^](#c2-note-0083){#c2-note-0083a}
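
A toy version of such a machine can be sketched in a few lines; the rule table below, which merely inverts a string of binary digits, is invented for illustration and is of course not Turing's own notation:

```python
# A toy Turing machine: tape, head, state, and a fixed transition table.
# The table (state, symbol) -> (write, move, next_state) inverts a binary
# string; it stands in for any algorithm one might encode in this way.
RULES = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),   # blank symbol: stop
}

def run(tape: str) -> str:
    cells = list(tape) + ["_"]         # the tape, padded with a blank
    head, state = 0, "scan"
    while state != "halt":
        write, move, state = RULES[(state, cells[head])]  # read and look up
        cells[head] = write                               # manipulate
        head += move                                      # move on
    return "".join(cells).rstrip("_")                     # write out again

print(run("100110"))  # -> "011001"
```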

Gordon Moore, the co-founder of the chip manufacturer Intel,
prognosticated 20 years later that the complexity of integrated circuits
and thus the processing power of computer chips would double every 18 to
24 months. Since the 1970s, his prediction has been known as Moore\'s
Law and has essentially been correct. This technical development has
indeed taken place exponentially, not least because the semi-conductor
industry has been oriented around
it.[^84^](#c2-note-0084){#c2-note-0084a} An IBM 360/40 mainframe
computer, which was one of the first of its kind to be produced on a
large scale, could make approximately 40,000 calculations per second and
its cost, when it was introduced to the market in 1965, was \$1.5
million per unit. Just 40 years later, a standard server (with a
quad-core Intel processor) could make more than 40 billion calculations
per second, and this at a price of little more than \$1,500. This
amounts to an increase in performance by a factor of a million and a
corresponding price reduction by a factor of a thousand; that is, an
improvement in the price-to-performance ratio by a factor of a billion.
With inflation taken into consideration, this factor would be even
higher. No less dramatic were the increases in performance -- or rather
[]{#Page_106 type="pagebreak" title="106"}the price reductions -- in the
area of data storage. In 1980, it cost more than \$400,000 to store a
gigabyte of data, whereas 30 years later it would cost just 10 cents to
do the same -- a price reduction by a factor of 4 million. And in both
areas, this development has continued without pause.
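
The factors mentioned here follow directly from the cited figures; a few lines of arithmetic make the calculation explicit:

```python
# Figures from the text: IBM 360/40 (1965) vs. a quad-core server 40 years
# later, and storage prices in 1980 vs. 30 years later.
ops_1965, price_1965 = 40_000, 1_500_000           # calculations/s, US$
ops_2005, price_2005 = 40_000_000_000, 1_500       # calculations/s, US$

performance_gain  = ops_2005 / ops_1965            # 1,000,000
price_reduction   = price_1965 / price_2005        # 1,000
price_performance = performance_gain * price_reduction  # 1,000,000,000

gb_1980, gb_2010  = 400_000, 0.10                  # US$ per gigabyte
storage_reduction = gb_1980 / gb_2010              # 4,000,000

print(performance_gain, price_reduction, price_performance, storage_reduction)
```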

These increases in performance have formed the material basis for the
rapidly growing number of activities carried out by means of algorithms.
We have now reached a point where Leibniz\'s distinction between
creative mental functions and "simple calculations" is becoming
increasingly fuzzy. Recent discussions about the allegedly threatening
"domination of the computer" have been kindled less by the increased use
of algorithms as such than by the gradual blurring of this distinction
with new possibilities to formalize and mechanize increasing areas of
creative thinking.[^85^](#c2-note-0085){#c2-note-0085a} Activities that
not long ago were reserved for human intelligence, such as composing
texts or analyzing the content of images, are now frequently done by
machines. As early as 2010, a program called Stats Monkey was introduced
to produce short reports about baseball games. All that the program
needs for this is comprehensive data about the games, which can be
accumulated mechanically and which have since become more detailed due
to improved image recognition and sensors. From these data, the program
extracts the decisive moments and players of a game, recognizes
characteristic patterns throughout the course of play (such as
"extending an early lead," "a dramatic comeback," etc.), and on this
basis generates its own report. Regarding the reports themselves, a
number of variables can be determined in advance, for instance whether
the story should be written from the perspective of a neutral observer
or from the standpoint of one of the two teams. If writing about little
league games, the program can be instructed to ignore the errors made by
children -- because no parent wants to read about those -- and simply
focus on their heroics. The algorithm was soon patented, and a start-up
business was created from the original interdisciplinary research
project: Narrative Science. In addition to sports reports, it now offers
texts of all sorts, but above all financial reports -- another field for
which there is a great deal of available data. These texts have been
published by reputable media outlets such as the business magazine
*Forbes*, in which their authorship []{#Page_107 type="pagebreak"
title="107"}is credited to "Narrative Science." Although these
contributions are still limited to relatively simple topics, this will
not remain the case for long. When asked about the percentage of news
that would be written by computers 15 years from now, Narrative
Science\'s chief technology officer and co-founder Kristian Hammond
confidently predicted "\[m\]ore than 90 percent." He added that, within
the next five years, an algorithm could even win a Pulitzer
Prize.[^86^](#c2-note-0086){#c2-note-0086a} This may be blatant hype and
self-promotion but, as a general estimation, Hammond\'s assertion is not
entirely beyond belief. It remains to be seen whether algorithms will
replace or simply supplement traditional journalism. Yet because media
companies are now under strong financial pressure, it is certainly
reasonable to predict that many journalistic texts will be automated in
the future. Entirely different applications, however, have also been
conceived. Alexander Pschera, for instance, foresees a new age in the
relationship between humans and nature, for, as soon as animals are
equipped with transmitters and sensors and are thus able to tell their
own stories through the appropriate software, they will be regarded as
individuals and not merely as generic members of a
species.[^87^](#c2-note-0087){#c2-note-0087a}

We have not yet reached this point. However, given that the CIA has also
expressed interest in Narrative Science and has invested in it through
its venture-capital firm In-Q-Tel, there are indications that
applications are being developed beyond the field of journalism. For the
purpose of spreading propaganda, for instance, algorithms can easily be
used to create a flood of entries on online forums and social mass
media.[^88^](#c2-note-0088){#c2-note-0088a} Narrative Science is only
one of many companies offering automated text analysis and production.
As implemented by IBM and other firms, so-called E-discovery software
promises to reduce dramatically the amount of time and effort required
to analyze the constantly growing numbers of files that are relevant to
complex legal cases. Without such software, it would be impossible in
practice for lawyers to deal with so many documents. Numerous bots
(automated editing programs) are active in the production of Wikipedia
as well. Whereas, in the German edition, bots are forbidden from writing
their own articles, this is not the case in the Swedish version.
Measured by the number of entries, the latter is now the second-largest
edition of the online encyclopedia in the []{#Page_108 type="pagebreak"
title="108"}world, for, in the summer of 2013, a single bot contributed
more than 200,000 articles to it.[^89^](#c2-note-0089){#c2-note-0089a}
Since 2013, moreover, the company Epagogix has offered software that
uses historical data to evaluate the market potential of film scripts.
At least one major Hollywood studio uses this software behind the backs
of scriptwriters and directors, for, according to the company\'s CEO,
the latter would be "nervous" to learn that their creative work was
being analyzed in such a way.[^90^](#c2-note-0090){#c2-note-0090a}
Think, too, of the typical statement that is made at the beginning of a
call to a telephone hotline -- "This call may be recorded for training
purposes." Increasingly, this training is not intended for the employees
of the call center but rather for algorithms. The latter are expected to
learn how to recognize the personality type of the caller and, on that
basis, to produce an appropriate script to be read by its poorly
educated and part-time human
co-workers.[^91^](#c2-note-0091){#c2-note-0091a} Another example is the
use of algorithms to grade student
essays,[^92^](#c2-note-0092){#c2-note-0092a} or ... But there is no need
to expand this list any further. Even without additional references to
comparable developments in the fields of image, sound, language, and
film analysis, it is clear by now that, on many fronts, the borders
between the creative and the mechanical have
shifted.[^93^](#c2-note-0093){#c2-note-0093a}
:::

::: {.section}
### Dynamic algorithms {#c2-sec-0021}

The algorithms used for such tasks, however, are no longer simple
sequences of static instructions. They are no longer repeated unchanged,
over and over again, but are dynamic and adaptive to a high degree. The
computing power available today is used to write programs that modify
and improve themselves semi-automatically and in response to feedback.

What this means can be illustrated by the example of evolutionary and
self-learning algorithms. An evolutionary algorithm is developed in an
iterative process that continues to run until the desired result has
been achieved. In most cases, the values of the variables of the first
generation of algorithms are chosen at random in order to diminish the
influence of the programmer\'s presuppositions on the results. These
cannot be avoided entirely, however, because the type of variables
(independent of their value) has to be determined in the first place. I
will return to this problem later on. This is []{#Page_109
type="pagebreak" title="109"}followed by a phase of evaluation: the
output of every tested algorithm is evaluated according to how close it
is to the desired solution. The best are then chosen and combined with
one another. In addition, mutations (that is, random changes) are
introduced. These steps are then repeated as often as necessary until,
according to the specifications in question, the algorithm is
"sufficient" or cannot be improved any further. By means of intensive
computational processes, algorithms are thus "cultivated"; that is,
large numbers of these are tested instead of a single one being designed
analytically and then implemented. At the heart of this pursuit is a
functional solution that proves itself experimentally and in practice,
but about which it might no longer be possible to know why it functions
or whether it actually is the best possible solution. The fundamental
methods behind this process largely derive from the 1970s (the first
stage of artificial intelligence), the difference being that today they
can be carried out far more effectively. One of the best-known examples
of an evolutionary algorithm is that of Google Flu Trends. In order to
predict which regions will be especially struck by the flu in a given
year, it evaluates the geographic distribution of internet searches for
particular terms ("cold remedies," for instance). To develop the
program, Google tested 450 million different models until one emerged
that could reliably identify local flu epidemics one to two weeks ahead
of the national health authorities.[^94^](#c2-note-0094){#c2-note-0094a}
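
The iterative scheme described above (random initialization, evaluation, selection, recombination, mutation) can be sketched generically; the toy fitness function and all parameters below are placeholders and have nothing to do with any real system such as Google Flu Trends:

```python
import random

def evolve(fitness, length=8, pop_size=50, generations=100, mutation_rate=0.05):
    """Generic evolutionary loop: candidate solutions are 'cultivated' in
    large numbers rather than designed analytically. 'fitness' scores a
    candidate (higher is better); candidates are lists of numbers in [0, 1]."""
    population = [[random.random() for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluation and selection: keep the better half of the population.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Recombination and mutation: refill the population from the parents.
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]                      # combine two parents
            if random.random() < mutation_rate:
                child[random.randrange(length)] = random.random()  # random change
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Toy objective: maximize the sum of the genes (best possible value: 8.0).
best = evolve(fitness=sum)
print(round(sum(best), 2))
```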

In pursuits of this magnitude, the necessary processes can only be
administered by computer programs. The series of tests are no longer
conducted by programmers but rather by algorithms. In short, algorithms
are implemented in order to write new algorithms or determine their
variables. If this reflexive process, in turn, is built into an
algorithm, then the latter becomes "self-learning": the programmers do
not set the rules for its execution but rather the rules according to
which the algorithm is supposed to know how to accomplish a particular
goal. In many cases, the solution strategies are so complex that they
are incomprehensible in retrospect. They can no longer be tested
logically, only experimentally. Such algorithms are essentially black
boxes -- objects that can only be understood by their outer behavior but
whose internal structure cannot be known.[]{#Page_110 type="pagebreak"
title="110"}

Automatic facial recognition, as used in surveillance technologies and
for authorizing access to certain things, is based on the fact that
computers can evaluate large numbers of facial images, first to produce
a general model for a face, then to identify the variables that make a
face unique and therefore recognizable. With so-called "unsupervised" or
"deep-learning" algorithms, some developers and companies have even
taken this a step further: computers are expected to extract faces from
unstructured images -- that is, from volumes of images that contain
images both with faces and without them -- and to do so without
possessing in advance any model of the face in question. So far, the
extraction and evaluation of unknown patterns from unstructured material
has only been achieved in the case of very simple patterns -- with edges
or surfaces in images, for instance -- for it is extremely complex and
computationally intensive to program such learning processes. In recent
years, however, there have been enormous leaps in available computing
power, and both the data inputs and the complexity of the learning
models have increased exponentially. Today, on the basis of simple
patterns, algorithms are developing improved recognition of the complex
content of images. They are refining themselves on their own. The term
"deep learning" is meant to denote this very complexity. In 2012, Google
was able to demonstrate the performance capacity of its new programs in
an impressive manner: from a collection of randomly chosen YouTube
videos, analyzed in a cluster by 1,000 computers with 16,000 processors,
it was possible to create a model in just three days that increased
facial recognition in unstructured images by 70
percent.[^95^](#c2-note-0095){#c2-note-0095a} Of course, the algorithm
does not "know" what a face is, but it reliably recognizes a class of
forms that humans refer to as a face. One advantage of a model that is
not created on the basis of prescribed parameters is that it can also
identify faces in non-standard situations (for instance if a person is
in the background, if a face is half-concealed, or if it has been
recorded at a sharp angle). Thanks to this technique, it is possible to
search the content of images directly and not, as before, primarily by
searching their descriptions. Such algorithms are also being used to
identify people in images and to connect them in social networks with
the profiles of the people in question, and this []{#Page_111
type="pagebreak" title="111"}without any cooperation from the users
themselves. Such algorithms are also expected to assist in directly
controlling activity in "unstructured" reality, for instance in
self-driving cars or other autonomous mobile applications that are of
great interest to the military in particular.

Algorithms of this sort can react and adjust themselves directly to
changes in the environment. This feedback, however, also shortens the
timeframe within which they are able to generate repetitive and
therefore predictable results. Thus, algorithms and their predictive
powers can themselves become unpredictable. Stock markets have
frequently experienced so-called "sub-second extreme events"; that is,
price fluctuations that happen in less than a
second.[^96^](#c2-note-0096){#c2-note-0096a} Dramatic "flash crashes,"
however, such as that which occurred on May 6, 2010, when the Dow Jones
Index dropped almost a thousand points in a few minutes (and was thus
perceptible to humans), have not been terribly
uncommon.[^97^](#c2-note-0097){#c2-note-0097a} With the introduction of
voice commands on mobile phones (Apple\'s Siri, for example, which came
out in 2011), programs based on self-learning algorithms have now
reached the public at large and have infiltrated ever more areas of
everyday life.
:::

::: {.section}
### Sorting, ordering, extracting {#c2-sec-0022}

Orders generated by algorithms are a constitutive element of the digital
condition. On the one hand, the mechanical pre-sorting of the
(informational) world is a precondition for managing immense and
unstructured amounts of data. On the other hand, these large amounts of
data and the computing centers in which they are stored and processed
provide the material precondition for developing increasingly complex
algorithms. Necessities and possibilities are mutually motivating one
another.[^98^](#c2-note-0098){#c2-note-0098a}

Perhaps the best-known algorithms that sort the digital infosphere and
make it usable in its present form are those of search engines, above
all Google\'s PageRank. Thanks to these, we can find our way around in a
world of unstructured information and transfer increasingly larger parts
of the (informational) world into the order of unstructuredness without
giving rise to the "Library of Babel." Here, "unstructured" means that
there is no prescribed order such as (to stick []{#Page_112
type="pagebreak" title="112"}with the image of the library) a cataloging
system that assigns to each book a specific place on a shelf. Rather,
the books are spread all over the place and are dynamically arranged,
each according to a search, so that the appropriate books for each
visitor are always standing ready at the entrance. Yet the metaphor of
books being strewn all about is problematic, for "unstructuredness" does
not simply mean the absence of any structure but rather the presence of
another type of order -- a meta-structure, a potential for order -- out
of which innumerable specific arrangements can be generated on an ad hoc
basis. This meta-structure is created by algorithms. They subsequently
derive from it an actual order, which the user encounters, for instance,
when he or she scrolls through a list of hits produced by a search
engine. What the user does not see are the complex preconditions for
assembling the search results. By the middle of 2014, according to the
company\'s own information, the Google index alone included more than a
hundred million gigabytes of data.

Originally (that is, in the second half of the 1990s), PageRank
functioned in such a way that the algorithm analyzed the structure of
links on the World Wide Web, first by noting the number of links that
referred to a given document, and second by evaluating the "relevance"
of the site that linked to the document in question. The relevance of a
site, in turn, was determined by the number of links that led to it.
From these two variables, every document registered by the search engine
was assigned a value, the PageRank. The latter served to present the
documents found with a given search term as a hierarchical list (search
results), whereby the document with the highest value was listed
first.[^99^](#c2-note-0099){#c2-note-0099a} This algorithm was extremely
successful because it reduced the unfathomable chaos of the World Wide
Web to a task that could be managed without difficulty by an individual
user: inputting a search term and selecting from one of the presented
"hits." The simplicity of the user\'s final choice, together with the
quality of the algorithmic pre-selection, quickly pushed Google past its
competition.
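
The two-variable logic described here corresponds to the well-known iterative PageRank computation; the following is a much-simplified sketch over an invented link graph, not Google's actual implementation:

```python
# Simplified PageRank: a page's rank is fed by the ranks of the pages
# linking to it, each divided among that page's outbound links.
links = {                      # hypothetical link graph: page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}       # start from a uniform rank
    for _ in range(iterations):
        new = {}
        for p in pages:
            inbound = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - damping) / len(pages) + damping * inbound
        rank = new
    return rank

print(pagerank(links))  # "c" ends up highest: it is linked to most often
```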

Underlying this process is the assumption that every link is an
indication of relevance, and that links from frequently linked (that is,
popular) sources are more important than those from less frequently
linked (that is, unpopular) sources. []{#Page_113 type="pagebreak"
title="113"}The advantage of this assumption is that it can be
understood in terms of purely quantitative variables and it is not
necessary to have any direct understanding of a document\'s content or
of the context in which it exists.

In the middle of the 1990s, when the first version of the PageRank
algorithm was developed, the problem of judging the relevance of
documents whose content could only partially be evaluated was not a new
one. Science administrators at universities and funding agencies had
been facing this difficulty since the 1950s. During the rise of the
knowledge economy, the number of scientific publications increased
rapidly. Scientific fields, perspectives, and methods also multiplied
and diversified during this time, so that even experts could not survey
all of the work being done in their own areas of
research.[^100^](#c2-note-0100){#c2-note-0100a} Thus, instead of reading
and evaluating the content of countless new publications, they shifted
their analysis to a higher level of abstraction. They began to count how
often an article or book was cited and applied this information to
assess the value of a given author or
publication.[^101^](#c2-note-0101){#c2-note-0101a} The underlying
assumption was (and remains) that only important things are referenced,
and therefore every citation and every reference can be regarded as an
indirect vote for something\'s relevance.

In both cases -- classifying a chaotic sphere of information and
administering an expanding industry of knowledge -- the challenge is to
develop dynamic orders for rapidly changing fields, enabling the
evaluation of the importance of individual documents without knowledge
of their content. Because the analysis of citations or links operates on
a purely quantitative basis, large amounts of data can be quickly
structured with them, and especially relevant positions can be
determined. The second advantage of this approach is that it does not
require any assumptions about the contours of different fields or their
relationships to one another. This enables the organization of
disordered or dynamic content. In both cases, references made by the
actors themselves are used: citations in a scientific text, links on
websites. Their value for establishing the order of a field as a whole,
however, is only visible in the aggregate, for instance in the frequency
with which a given article is
cited.[^102^](#c2-note-0102){#c2-note-0102a} In both cases, the shift
from analyzing "data" (the content of documents in the traditional
sense) to []{#Page_114 type="pagebreak" title="114"}analyzing
"meta-data" (describing documents in light of their relationships to one
another) is a precondition for being able to make any use at all of
growing amounts of information.[^103^](#c2-note-0103){#c2-note-0103a}
This shift introduced a new level of abstraction. Information is no
longer understood as a representation of external reality; its
significance is not evaluated with regard to the relation between
"information" and "the world," for instance with a qualitative criterion
such as "true"/"false." Rather, the sphere of information is treated as
a self-referential, closed world, and documents are accordingly only
evaluated in terms of their position within this world, though with
quantitative criteria such as "central"/"peripheral."

Even though the PageRank algorithm was highly effective and assisted
Google\'s rapid ascent to a market-leading position, at the beginning it
was still relatively simple and its mode of operation was at least
partially transparent. It followed the classical statistical model of an
algorithm. A document or site referred to by many links was considered
more important than one to which fewer links
referred.[^104^](#c2-note-0104){#c2-note-0104a} The algorithm analyzed
the given structural order of information and determined the position of
every document therein, and this was largely done independently of the
context of the search and without making any assumptions about it. This
approach functioned relatively well as long as the volume of information
did not exceed a certain size, and as long as the users and their
searches were somewhat similar to one another. In both respects, this is
no longer the case. The amount of information to be pre-sorted is
increasing, and users are searching in all possible situations and
places for everything under the sun. At the time Google was founded, no
one would have thought to check the internet, quickly and while on
one\'s way, for today\'s menu at the restaurant round the corner. Now,
thanks to smartphones, this is an obvious thing to do.
:::

::: {.section}
### Algorithm clouds {#c2-sec-0023}

In order to react to such changes in user behavior -- and simultaneously
to advance it further -- Google\'s search algorithm is constantly being
modified. It has become increasingly complex and has assimilated a
greater amount of contextual []{#Page_115 type="pagebreak"
title="115"}information, which influences the value of a site within
PageRank and thus the order of search results. The algorithm is no
longer a fixed object or unchanging recipe but is transforming into a
dynamic process, an opaque cloud composed of multiple interacting
algorithms that are continuously refined (between 500 and 600 times a
year, according to some estimates). These ongoing developments are so
extensive that, since 2003, several new versions of the algorithm cloud
have appeared each year with their own names. In 2014 alone, Google
carried out 13 large updates, more than ever
before.[^105^](#c2-note-0105){#c2-note-0105a}

These changes continue to bring about new levels of abstraction, so that
the algorithm takes into account additional variables such as the time
and place of a search, alongside a person\'s previously recorded
behavior -- but also his or her involvement in social environments, and
much more. Personalization and contextualization were made part of
Google\'s search algorithm in 2005. At first it was possible to choose
whether or not to use these. Since 2009, however, they have been a fixed
and binding component for everyone who conducts a search through
Google.[^106^](#c2-note-0106){#c2-note-0106a} By the middle of 2013, the
search algorithm had grown to include at least 200
variables.[^107^](#c2-note-0107){#c2-note-0107a} What is relevant is
that the algorithm no longer determines the position of a document
within a dynamic informational world that exists for everyone
externally. Instead, it now assigns a rank to their content within a
dynamic and singular universe of information that is tailored to every
individual user. For every person, an entirely different order is
created instead of just an excerpt from a previously existing order. The
world is no longer being represented; it is generated uniquely for every
user and then presented. Google is not the only company that has gone
down this path. Orders produced by algorithms have become increasingly
oriented toward creating, for each user, his or her own singular world.
Facebook, dating services, and other social mass media have been
pursuing this approach even more radically than Google.
:::

::: {.section}
### From the data shadow to the synthetic profile {#c2-sec-0024}

This form of generating the world requires not only detailed information
about the external world (that is, the reality []{#Page_116
type="pagebreak" title="116"}shared by everyone) but also information
about every individual\'s own relation to the
latter.[^108^](#c2-note-0108){#c2-note-0108a} To this end, profiles are
established for every user, and the more extensive they are, the better
they are for the algorithms. A profile created by Google, for instance,
identifies the user on three levels: as a "knowledgeable person" who is
informed about the world (this is established, for example, by recording
a person\'s searches, browsing behavior, etc.), as a "physical person"
who is located and mobile in the world (a component established, for
example, by tracking someone\'s location through a smartphone, sensors
in a smart home, or body signals), and as a "social person" who
interacts with other people (a facet that can be determined, for
instance, by following someone\'s activity on social mass
media).[^109^](#c2-note-0109){#c2-note-0109a}
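
One way to picture such a profile is as a record with three facets corresponding to these levels; the field names below are purely illustrative and do not reflect any provider's internal schema:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Hypothetical profile with the three facets described in the text."""
    # "Knowledgeable person": traces of searching and browsing.
    searches: list[str] = field(default_factory=list)
    # "Physical person": location and movement data.
    locations: list[tuple[float, float]] = field(default_factory=list)
    # "Social person": interactions on social mass media.
    interactions: list[str] = field(default_factory=list)

profile = UserProfile(
    searches=["cold remedies"],
    locations=[(50.85, 4.35)],
    interactions=["liked: friend's post"],
)
```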

Unlike the situation in the 1990s, however, these profiles are no longer
simply representations of singular people -- they are not "digital
personas" or "data shadows." They no longer represent what is
conventionally referred to as "individuality," in the sense of a
spatially and temporally uniform identity. On the one hand, profiles
rather consist of sub-individual elements -- of fragments of recorded
behavior that can be evaluated on the basis of a particular search
without promising to represent a person as a whole -- and they consist,
on the other hand, of clusters of multiple people, so that the person
being modeled can simultaneously occupy different positions in time.
This temporal differentiation enables predictions of the following sort
to be made: a person who has already done *x* will, with a probability
of *y*, go on to engage in activity *z*. It is in this way that Amazon
assembles its book recommendations, for the company knows that, within
the cluster of people that constitutes part of every person\'s profile,
a certain percentage of them have already gone through this sequence of
activity. Or, as the data-mining company Science Rockstars (!) once
pointedly expressed on its website, "Your next activity is a function of
the behavior of others and your own past."
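
Read as a formula, this kind of prediction is an estimate of a conditional probability taken from the cluster; a minimal sketch with invented counts:

```python
# P(z | x): among all profiles in the cluster that contain activity x,
# the share that went on to activity z. The counts are invented.
cluster = [
    {"bought_book_x", "bought_book_z"},
    {"bought_book_x"},
    {"bought_book_x", "bought_book_z"},
    {"bought_book_y"},
]

did_x   = [p for p in cluster if "bought_book_x" in p]
did_x_z = [p for p in did_x if "bought_book_z" in p]
probability = len(did_x_z) / len(did_x)
print(round(probability, 2))  # 0.67: the basis for recommending z to anyone who did x
```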

Google and other providers of algorithmically generated orders have been
devoting increased resources to the prognostic capabilities of their
programs in order to make the confusing and potentially time-consuming
step of the search obsolete. The goal is to minimize a rift that comes
to light []{#Page_117 type="pagebreak" title="117"}in the act of
searching, namely that between the world as everyone experiences it --
plagued by uncertainty, for searching implies "not knowing something" --
and the world of algorithmically generated order, in which certainty
prevails, for everything has been well arranged in advance. Ideally,
questions should be answered before they are asked. The first attempt by
Google to eliminate this rift is called Google Now, and its slogan is
"The right information at just the right time." The program, which was
originally developed as an app but has since been made available on
Chrome, Google\'s own web browser, attempts to anticipate, on the basis
of existing data, a user\'s next step, and to provide the necessary
information before it is searched for in order that such steps take
place efficiently. Thus, for instance, it draws upon information from a
user\'s calendar in order to figure out where he or she will have to go
next. On the basis of real-time traffic data, it will then suggest the
optimal way to get there. For those driving cars, the amount of traffic
on the road will be part of the equation. This is ascertained by
analyzing the motion profiles of other drivers, which will allow the
program to determine whether the traffic is flowing or stuck in a jam.
If enough historical data is taken into account, the hope is that it
will be possible to redirect cars in such a way that traffic jams should
no longer occur.[^110^](#c2-note-0110){#c2-note-0110a} For those who use
public transport, Google Now evaluates real-time data about the
locations of various transport services. With this information, it will
suggest the optimal route and, depending on the calculated travel time,
it will send a reminder (sometimes earlier, sometimes later) when it is
time to go. That which Google is just experimenting with and testing in
a limited and unambiguous context is already part of Facebook\'s
everyday operations. With its EdgeRank algorithm, Facebook already
organizes everyone\'s newsfeed, entirely in the background and without
any explicit user interaction. On the basis of three variables -- user
affinity (previous interactions between two users), content weight (the
rate of interaction between all users and a specific piece of content),
and currency (the age of a post) -- the algorithm selects content from
the status updates made by one\'s friends to be displayed on one\'s own
page.[^111^](#c2-note-0111){#c2-note-0111a} In this way, Facebook
ensures that the stream of updates remains easy to scroll through, while
also -- it is safe []{#Page_118 type="pagebreak" title="118"}to assume
-- leaving enough room for advertising. This potential for manipulation,
which algorithms possess as they work away in the background, will be
the topic of my next section.
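
The way these three variables are usually said to combine (a product of affinity, content weight, and a time decay) can be sketched schematically; the actual formula and coefficients are not public, so everything below is invented for illustration:

```python
import math

def edge_score(affinity: float, weight: float, age_hours: float,
               decay: float = 0.1) -> float:
    """Schematic newsfeed score: closer friends (affinity), more engaging
    content types (weight), and newer posts (exponential time decay) rank
    higher. All coefficients here are invented for illustration."""
    return affinity * weight * math.exp(-decay * age_hours)

posts = [
    ("close friend, photo, 2h old",   edge_score(0.9, 1.5, 2)),
    ("acquaintance, link, 1h old",    edge_score(0.2, 1.0, 1)),
    ("close friend, status, 30h old", edge_score(0.9, 1.0, 30)),
]
for label, score in sorted(posts, key=lambda p: p[1], reverse=True):
    print(round(score, 3), label)
```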
:::

::: {.section}
### Variables and correlations {#c2-sec-0025}

Every complex algorithm contains a multitude of variables and usually an
even greater number of ways to make connections between them. Every
variable and every relation, even if they are expressed in technical or
mathematical terms, codifies assumptions that express a specific
position in the world. There can be no purely descriptive variables,
just as there can be no such thing as "raw
data."[^112^](#c2-note-0112){#c2-note-0112a} Both -- data and variables
-- are always already "cooked"; that is, they are engendered through
cultural operations and formed within cultural
categories.[^113^](#c2-note-0113){#c2-note-0113a} With every use of
produced data and with every execution of an algorithm, the assumptions
embedded in them are activated, and the positions contained within them
have effects on the world that the algorithm generates and presents.

As already mentioned, the early version of the PageRank algorithm was
essentially based on the rather simple assumption that frequently linked
content is more relevant than content that is only seldom linked to, and
that links to sites that are themselves frequently linked to should be
given more weight than those found on sites with fewer links to them.
Replacing the qualitative criterion of "relevance" with the quantitative
criterion of "popularity" not only proved to be tremendously practical
but also extremely consequential, for search engines not only describe
the world; they create it as well. That which search engines put at the
top of this list is not just already popular but will remain so. A third
of all users click on the first search result, and around 95 percent do
not look past the first 10.[^114^](#c2-note-0114){#c2-note-0114a} Even
the earliest version of the PageRank algorithm did not represent
existing reality but rather (co-)constituted it.

Popularity, however, is not the only element with which algorithms
actively give shape to the user\'s world. A search engine can only sort,
weigh, and make available that portion of information which has already
been incorporated into its index. Everything else remains invisible. The
relation between []{#Page_119 type="pagebreak" title="119"}the recorded
part of the internet (the "surface web") and the unrecorded part (the
"deep web") is difficult to determine. Estimates have varied between
ratios of 1:5 and 1:500.[^115^](#c2-note-0115){#c2-note-0115a} There are
many reasons why content might be inaccessible to search engines.
Perhaps the information has been saved in formats that search engines
cannot read or can only poorly read, or perhaps it has been hidden
behind proprietary barriers such as paywalls. In order to expand the
realm of things that can be exploited by their algorithms, the operators
of search engines offer extensive guidance about how providers should
design their sites so that search tools can find them in an optimal
manner. It is not necessary to follow this guidance, but given the
central role of search engines in sorting and filtering information, it
is clear that they exercise a great deal of power by setting the
standards.[^116^](#c2-note-0116){#c2-note-0116a}

That the individual must "voluntarily" submit to this authority is
typical of the power of networks, which do not give instructions but
rather constitute preconditions. Yet it is in the interest of (almost)
every producer of information to optimize its position in a search
engine\'s index, and thus there is a strong incentive to accept the
preconditions in question. Considering, moreover, the nearly
monopolistic character of many providers of algorithmically generated
orders and the high price that one would have to pay if one\'s own site
were barely (or not at all) visible to others, the term "voluntary"
begins to take on a rather foul taste. This is a more or less subtle way
of pre-formatting the world so that it can be optimally recorded by
algorithms.[^117^](#c2-note-0117){#c2-note-0117a}

The providers of search engines usually justify such methods in the name
of offering "more efficient" services and "more relevant" results.
Ostensibly technical and neutral terms such as "efficiency" and
"relevance" do little, however, to conceal the political nature of
defining variables. Efficient with respect to what? Relevant for whom?
These are issues that are decided without much discussion by the
developers and institutions that regard the algorithms as their own
property. Every now and again such questions incite public debates,
mostly when the interests of one provider happen to collide with those
of its competition. Thus, for instance, the initiative known as
FairSearch has argued that Google abuses its market power as a search
engine to privilege its []{#Page_120 type="pagebreak" title="120"}own
content and thus to showcase it prominently in search
results.[^118^](#c2-note-0118){#c2-note-0118a} FairSearch\'s
representatives alleged, for example, that Google favors its own map
service in the case of address searches and its own price comparison
service in the case of product searches. The argument had an effect. In
November of 2010, the European Commission initiated an antitrust
investigation against Google. In 2014, a settlement was proposed that
would have required the American internet giant to pay certain
concessions, but the members of the Commission, the EU Parliament, and
consumer protection agencies were not satisfied with the agreement. In
April 2015, the anti-trust proceedings were recommenced by a newly
appointed Commission, its reasoning being that "Google does not apply to
its own comparison shopping service the system of penalties which it
applies to other comparison shopping services on the basis of defined
parameters, and which can lead to the lowering of the rank in which they
appear in Google\'s general search results
pages."[^119^](#c2-note-0119){#c2-note-0119a} In other words, the
Commission accused the company of manipulating search results to its own
advantage and the disadvantage of users.

This is not the only instance in which the political side of search
algorithms has come under public scrutiny. In the summer of 2012, Google
announced that sites with higher numbers of copyright removal notices
would henceforth appear lower in its
rankings.[^120^](#c2-note-0120){#c2-note-0120a} The company thus
introduced explicitly political and economic criteria in order to
influence what, according to the standards of certain powerful players
(such as film studios), users were able to
view.[^121^](#c2-note-0121){#c2-note-0121a} In this case, too, it would
be possible to speak of the personalization of searching, except that
the heart of the situation was not the natural person of the user but
rather the juridical person of the copyright holder. It was according to
the latter\'s interests and preferences that searching was being
reoriented. Amazon has employed similar tactics. In 2014, the online
merchant changed its celebrated recommendation algorithm with the goal
of reducing the presence of books released by publishers that had
irritated it by daring to enter into price negotiations with the
company.[^122^](#c2-note-0122){#c2-note-0122a}

Controversies over the methods of Amazon or Google, however, are the
exception rather than the rule. Necessary (but never neutral) decisions
about recording and evaluating data []{#Page_121 type="pagebreak"
title="121"}with algorithms are being made almost all the time without
any discussion whatsoever. The logic of the original PageRank algorithm
was criticized as early as the year 2000 for essentially representing
the commercial logic of mass media, systematically disadvantaging
less-popular though perhaps otherwise relevant information, and thus
undermining the "substantive vision of the web as an inclusive
democratic space."[^123^](#c2-note-0123){#c2-note-0123a} The changes to
the search algorithm that have been adopted since then may have modified
this tendency, but they have certainly not weakened it. In addition to
concentrating on what is popular, the new variables privilege recently
uploaded and constantly updated content. The selection of search results
is now contingent upon the location of the user, and it takes into
account his or her social networking. It is oriented toward the average
of a dynamically modeled group. In other words, Google\'s new algorithm
favors that which is gaining popularity within a user\'s social network.
The global village is thus becoming more and more
provincial.[^124^](#c2-note-0124){#c2-note-0124a}
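
The criticism about popularity can be made concrete with a toy version of the
original algorithm. The sketch below implements the published logic of
PageRank -- a page\'s score is fed by the scores of the pages that link to it
-- as a simple power iteration over an invented four-page link graph. The
graph and the iteration count are illustrative assumptions; only the damping
factor of 0.85 follows the 1998 description.

```python
# Toy PageRank by power iteration: pages that are linked to by already
# well-linked pages accumulate score -- the "popularity" logic at issue here.
DAMPING = 0.85  # damping factor as described in the 1998 paper

def pagerank(links: dict[str, list[str]], iterations: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - DAMPING) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its score evenly
                for p in pages:
                    new_rank[p] += DAMPING * rank[page] / len(pages)
            else:
                for target in outgoing:
                    new_rank[target] += DAMPING * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# An invented four-page web: "portal" is linked from everywhere and dominates
# the ranking, while "niche" -- linked by no one -- remains nearly invisible.
toy_web = {
    "portal": ["blog"],
    "blog": ["portal"],
    "shop": ["portal"],
    "niche": ["portal", "blog"],
}
print(pagerank(toy_web))
```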
:::

::: {.section}
### Data behaviorism {#c2-sec-0026}

Algorithms such as Google\'s thus reiterate and reinforce a tendency
that has already been apparent on both the level of individual users and
that of communal formations: in order to deal with the vast amounts and
complexity of information, they direct their gaze inward, which is not
to say toward the inner being of individual people. As a level of
reference, the individual person -- with an interior world and with
ideas, dreams, and wishes -- is irrelevant. For algorithms, people are
black boxes that can only be understood in terms of their reactions to
stimuli. Consciousness, perception, and intention do not play any role
for them. In this regard, the legal philosopher Antoinette Rouvroy has
written about "data behaviorism."[^125^](#c2-note-0125){#c2-note-0125a}
With this, she is referring to the gradual return of a long-discredited
approach in behavioral psychology that postulated that human behavior
could be explained, predicted, and controlled purely on the basis of
outwardly observable and measurable actions.[^126^](#c2-note-0126){#c2-note-0126a}
Psychological dimensions were ignored (and are ignored in this new
version of behaviorism) because it is difficult to observe them
empirically. Accordingly, this approach also did away with the need
[]{#Page_122 type="pagebreak" title="122"}to question people directly or
take into account their subjective experiences, thoughts, and feelings.
People were regarded (and are so again today) as unreliable, as poor
judges of themselves, and as only partly honest when disclosing
information. Any strictly empirical science, or so the thinking went,
required its practitioners to disregard everything that did not result
in physical and observable action. From this perspective, it was
possible to break down even complex behavior into units of stimulus and
reaction. This led to the conviction that someone observing another\'s
activity always knows more than the latter does about himself or herself
for, unlike the person being observed, whose impressions can be
inaccurate, the observer is in command of objective and complete
information. Even early on, this approach faced a wave of critique. It
was held to be mechanistic, reductionist, and authoritarian because it
privileged the observing scientist over the subject. In practice, it
quickly ran into its own limitations: it was simply too expensive and
complicated to gather data about human behavior.
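
A deliberately crude sketch may clarify what this stance looks like once it
is implemented in software. The Python class below is entirely hypothetical
-- it is not any provider\'s actual system -- and predicts a "reaction"
solely from the frequency of previously observed stimulus--reaction pairs;
consciousness, perception, and intention never enter the model.

```python
# "Data behaviorism" in miniature: the user is treated as a black box whose
# next reaction is predicted only from observed stimulus -> reaction pairs.
from collections import Counter, defaultdict

class BlackBoxUser:
    def __init__(self):
        self.observations = defaultdict(Counter)  # stimulus -> reaction counts

    def observe(self, stimulus, reaction):
        self.observations[stimulus][reaction] += 1

    def predict(self, stimulus):
        """Return the most frequently observed reaction to this stimulus."""
        reactions = self.observations[stimulus]
        return reactions.most_common(1)[0][0] if reactions else None

user = BlackBoxUser()
user.observe("discount banner", "click")
user.observe("discount banner", "click")
user.observe("discount banner", "ignore")
print(user.predict("discount banner"))  # -> "click"
```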

Yet that has changed radically in recent years. It is now possible to
measure ever more activities, conditions, and contexts empirically.
Algorithms like Google\'s or Amazon\'s form the technical backdrop for
the revival of a mechanistic, reductionist, and authoritarian approach
that has resurrected the long-lost dream of an objective view -- the
view from nowhere.[^127^](#c2-note-0127){#c2-note-0127a} Every critique
of this positivistic perspective -- that every measurement result, for
instance, reflects not only the measured but also the measurer -- is
brushed aside with reference to the sheer amounts of data that are now
at our disposal.[^128^](#c2-note-0128){#c2-note-0128a} This attitude
substantiates the claim of those in possession of these new and
comprehensive powers of observation (which, in addition to Google and
Facebook, also includes the intelligence services of Western nations),
namely that they know more about individuals than individuals know about
themselves, and are thus able to answer our questions before we ask
them. As mentioned above, this is a goal that Google expressly hopes to
achieve.

At issue with this "inward turn" is thus the space of communal
formations, which is constituted by the sum of all of the activities of
their interacting participants. In this case, however, a communal
formation is not consciously created []{#Page_123 type="pagebreak"
title="123"}and maintained in a horizontal process, but rather
synthetically constructed as a computational function. Depending on the
context and the need, individuals can either be assigned to this
function or removed from it. All of this happens behind the user\'s back
and in accordance with the goals and positions that are relevant to the
developers of a given algorithm, be it to optimize profit or
surveillance, create social norms, improve services, or whatever else.
The results generated in this way are sold to users as a personalized
and efficient service that provides a quasi-magical product. Out of the
enormous haystack of searchable information, results are generated that
are made to seem like the very needle that we have been looking for. At
best, it is only partially transparent how these results came about and
which positions in the world are strengthened or weakened by them. Yet,
as long as the needle is somewhat functional, most users are content,
and the algorithm registers this contentedness to validate itself. In
this dynamic world of unmanageable complexity, users are guided by a
sort of radical, short-term pragmatism. They are happy to have the world
pre-sorted for them in order to improve their activity in it. Regarding
the matter of whether the information being provided represents the
world accurately or not, they are unable to formulate an adequate
assessment for themselves, for it is ultimately impossible to answer
this question without certain resources. Outside of rapidly shrinking
domains of specialized or everyday knowledge, it is becoming
increasingly difficult to gain an overview of the world without
mechanisms that pre-sort it. Users are only able to evaluate search
results pragmatically; that is, in light of whether or not they are
helpful in solving a concrete problem. In this regard, it is not
paramount that they find the best solution or the correct answer but
rather one that is available and sufficient. This reality lends an
enormous amount of influence to the institutions and processes that
provide the solutions and answers.[]{#Page_124 type="pagebreak"
title="124"}
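
How a communal formation can be "synthetically constructed as a computational
function" can be sketched, under strong simplifying assumptions, as a
distance calculation: users are reduced to activity vectors and are assigned
to, or dropped from, a modeled group depending on how close they sit to its
computed center. The features, numbers, and threshold below are invented
purely for illustration.

```python
# A synthetic group: membership is not negotiated by the participants but
# computed from activity vectors (invented features, e.g. [searches per day,
# share of video content]).
import math

def centroid(vectors):
    dims = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dims)]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assign(users, seed_vectors, threshold=1.0):
    """Assign each user to the modeled group iff close enough to its centroid."""
    c = centroid(seed_vectors)
    return {name: distance(vec, c) <= threshold for name, vec in users.items()}

users = {
    "u1": [12.0, 0.8],  # heavy searcher, mostly video: lands inside the group
    "u2": [11.5, 0.7],
    "u3": [2.0, 0.1],   # falls outside the modeled group
}
print(assign(users, [[12.0, 0.75], [11.0, 0.8]]))
```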
:::
:::

::: {.section .notesSet type="rearnotes"}
[]{#notesSet}Notes {#c2-ntgp-9999}
------------------

::: {.section .notesList}
[1](#c2-note-0001a){#c2-note-0001}  André Rottmann, "Reflexive Systems
of Reference: Approximations to 'Referentialism' in Contemporary Art,"
trans. Gerrit Jackson, in Dirk Snauwaert et al. (eds), *Rehabilitation:
The Legacy of the Modern Movement* (Ghent: MER, 2010), pp. 97--106, at
99.

[2](#c2-note-0002a){#c2-note-0002}  The recognizability of the sources
distinguishes these processes from plagiarism. The latter operates with
the complete opposite aim, namely that of borrowing sources without
acknowledging them.

[3](#c2-note-0003a){#c2-note-0003}  Ulf Poschardt, *DJ Culture* (London:
Quartet Books, 1998), p. 34.

[4](#c2-note-0004a){#c2-note-0004}  Theodor W. Adorno, *Aesthetic
Theory*, trans. Robert Hullot-Kentor (Minneapolis, MN: University of
Minnesota Press, 1997), p. 151.

[5](#c2-note-0005a){#c2-note-0005}  Peter Bürger, *Theory of the
Avant-Garde*, trans. Michael Shaw (Minneapolis, MN: University of
Minnesota Press, 1984).

[6](#c2-note-0006a){#c2-note-0006}  Felix Stalder, "Neun Thesen zur
Remix-Kultur," *i-rights.info* (May 25, 2009), online.

[7](#c2-note-0007a){#c2-note-0007}  Florian Cramer, *Exe.cut(up)able
Statements: Poetische Kalküle und Phantasmen des selbstausführenden
Texts* (Munich: Wilhelm Fink, 2011), pp. 9--10 \[--trans.\]

[8](#c2-note-0008a){#c2-note-0008}  McLuhan stressed that, despite using
the alphabet, every manuscript is unique because it depends not only on
the sequence of letters but also on the individual ability of a given
scribe to []{#Page_185 type="pagebreak" title="185"}lend these letters a
particular shape. With the rise of the printing press, the alphabet shed
these last elements of calligraphy and became typography.

[9](#c2-note-0009a){#c2-note-0009}  Elizabeth L. Eisenstein, *The
Printing Revolution in Early Modern Europe* (Cambridge: Cambridge
University Press, 1983), p. 15.

[10](#c2-note-0010a){#c2-note-0010}  Eisenstein, *The Printing
Revolution in Early Modern Europe*, p. 204.

[11](#c2-note-0011a){#c2-note-0011}  The fundamental aspects of these
conventions were formulated as early as the beginning of the sixteenth
century; see Michael Giesecke, *Der Buchdruck in der frühen Neuzeit:
Eine historische Fallstudie über die Durchsetzung neuer Informations-
und Kommunikationstechnologien* (Frankfurt am Main: Suhrkamp, 1991), pp.
420--40.

[12](#c2-note-0012a){#c2-note-0012}  Eisenstein, *The Printing
Revolution in Early Modern Europe*, p. 49.

[13](#c2-note-0013a){#c2-note-0013}  In April 2014, the Authors Guild --
the association of American writers that had sued Google -- filed an
appeal to overturn the decision and made a public statement demanding
that a new organization be established to license the digital rights of
out-of-print books. See "Authors Guild: Amazon was Google's Target,"
*The Authors Guild: Industry & Advocacy News* (April 11, 2014), online.
In October 2015, however, the next-highest authority -- the United
States Court of Appeals for the Second Circuit -- likewise decided in
Google\'s favor. The Authors Guild promptly announced its intention to
take the case to the Supreme Court.

[14](#c2-note-0014a){#c2-note-0014}  Jean-Noël Jeanneney, *Google and
the Myth of Universal Knowledge: A View from Europe*, trans. Teresa
Lavender Fagan (Chicago, IL: University of Chicago Press, 2007).

[15](#c2-note-0015a){#c2-note-0015}  Within the framework of the Images
for the Future project (2007--14), the Netherlands alone invested more
than €170 million to digitize the collections of the most important
audiovisual archives. Over 10 years, the cost of digitizing the entire
cultural heritage of Europe has been estimated to be around €100
billion. See Nick Poole, *The Cost of Digitising Europe\'s Cultural
Heritage: A Report for the Comité des Sages of the European Commission*
(November 2010), online.

[16](#c2-note-0016a){#c2-note-0016}  Robert Darnton, "The National
Digital Public Library Is Launched!", *New York Review of Books* (April
25, 2013), online.

[17](#c2-note-0017a){#c2-note-0017}  According to estimates by the
British Library, so-called "orphan works" alone -- that is, works still
legally protected but whose right holders are unknown -- make up around
40 percent of the books in its collection that still fall under
copyright law. In an effort to alleviate this problem, the European
Parliament and the European Commission issued a directive []{#Page_186
type="pagebreak" title="186"}in 2012 concerned with "certain permitted
uses of orphan works." This has allowed libraries and archives to make
works available online without permission if, "after carrying out
diligent searches," the copyright holders cannot be found. What
qualifies as a "diligent search," however, is so strictly formulated
that the German Library Association has called the directive
"impracticable." Deutscher Bibliotheksverband, "Rechtlinie über
bestimmte zulässige Formen der Nutzung verwaister Werke" (February 27,
2012), online.

[18](#c2-note-0018a){#c2-note-0018}  UbuWeb, "Frequently Asked
Questions," online.

[19](#c2-note-0019a){#c2-note-0019}  The numbers in this area of
activity are notoriously unreliable, and therefore only rough estimates
are possible. It seems credible, however, that the Pirate Bay was
attracting around a billion page views per month by the end of 2013.
That would make it the seventy-fourth most popular internet destination.
See Ernesto, "Top 10 Most Popular Torrent Sites of 2014" (January 4,
2014), online.

[20](#c2-note-0020a){#c2-note-0020}  See the documentary film *TPB AFK:
The Pirate Bay Away from Keyboard* (2013), directed by Simon Klose.

[21](#c2-note-0021a){#c2-note-0021}  In technical terms, there is hardly
any difference between a "stream" and a "download." In both cases, a
complete file is transferred to the user\'s computer and played.

[22](#c2-note-0022a){#c2-note-0022}  The practice is legal in Germany
but illegal in Austria, though digitized texts are routinely made
available there in seminars. See Seyavash Amini Khanimani and Nikolaus
Forgó, "Rechtsgutachten über die Erforderlichkeit einer freien
Werknutzung im österreichischen Urheberrecht zur Privilegierung
elektronisch unterstützter Lehre," *Forum Neue Medien Austria* (January
2011), online.

[23](#c2-note-0023a){#c2-note-0023}  Deutscher Bibliotheksverband,
"Digitalisierung" (2015), online \[--trans\].

[24](#c2-note-0024a){#c2-note-0024}  David Weinberger, *Everything Is
Miscellaneous: The Power of the New Digital Disorder* (New York: Times
Books, 2007).

[25](#c2-note-0025a){#c2-note-0025}  This is not a question of material
wealth. Those who are economically or socially marginalized are
confronted with the same phenomenon. Their primary experience of this
excess is with cheap goods and junk.

[26](#c2-note-0026a){#c2-note-0026}  See Gregory Bateson, "Form,
Substance and Difference," in Bateson, *Steps to an Ecology of Mind:
Collected Essays in Anthropology, Psychiatry, Evolution and
Epistemology* (London: Jason Aronson, 1972), pp. 455--71, at 460:
"\[I\]n fact, what we mean by information -- the elementary unit of
information -- is *a difference which makes a difference*" (the emphasis
is original).

[27](#c2-note-0027a){#c2-note-0027}  Inke Arns and Gabriele Horn,
*History Will Repeat Itself* (Frankfurt am Main: Revolver, 2007), p.
42.[]{#Page_187 type="pagebreak" title="187"}

[28](#c2-note-0028a){#c2-note-0028}  See the film *The Battle of
Orgreave* (2001), directed by Mike Figgis.

[29](#c2-note-0029a){#c2-note-0029}  Theresa Winge, "Costuming the
Imagination: Origins of Anime and Manga Cosplay," *Mechademia* 1 (2006),
pp. 65--76.

[30](#c2-note-0030a){#c2-note-0030}  Nicolle Lamerichs, "Stranger than
Fiction: Fan Identity in Cosplay," *Transformative Works and Cultures* 7
(2011), online.

[31](#c2-note-0031a){#c2-note-0031}  The *Oxford English Dictionary*
defines "selfie" as a "photographic self-portrait; *esp*. one taken with
a smartphone or webcam and shared via social media."

[32](#c2-note-0032a){#c2-note-0032}  Odin Kroeger et al. (eds),
*Geistiges Eigentum und Originalität: Zur Politik der Wissens- und
Kulturproduktion* (Vienna: Turia + Kant, 2011).

[33](#c2-note-0033a){#c2-note-0033}  Roland Barthes, "The Death of the
Author," in Barthes, *Image -- Music -- Text*, trans. Stephen Heath
(London: Fontana Press, 1977), pp. 142--8.

[34](#c2-note-0034a){#c2-note-0034}  Heinz Rölleke and Albert
Schindehütte, *Es war einmal: Die wahren Märchen der Brüder Grimm und
wer sie ihnen erzählte* (Frankfurt am Main: Eichborn, 2011); and Heiner
Boehncke, *Marie Hassenpflug: Eine Märchenerzählerin der Brüder Grimm*
(Darmstadt: Von Zabern, 2013).

[35](#c2-note-0035a){#c2-note-0035}  Hansjörg Ewert, "Alles nur
geklaut?", *Zeit Online* (February 26, 2013), online. This is not a new
realization but has long been a special area of research for
musicologists. What is new, however, is that it is no longer
controversial outside of this narrow disciplinary discourse. See Peter
J. Burkholder, "The Uses of Existing Music: Musical Borrowing as a
Field," *Notes* 50 (1994), pp. 851--70.

[36](#c2-note-0036a){#c2-note-0036}  Zygmunt Bauman, *Liquid Modernity*
(Cambridge: Polity, 2000), p. 56.

[37](#c2-note-0037a){#c2-note-0037}  Quoted from Eran Schaerf\'s audio
installation *FM-Scenario: Reality Race* (2013), online.

[38](#c2-note-0038a){#c2-note-0038}  The number of members, for
instance, of the two large political parties in Germany, the Social
Democratic Party and the Christian Democratic Union, reached its peak at
the end of the 1970s or the beginning of the 1980s. Both were able to
increase their absolute numbers for a brief time at the beginning of the
1990s, when the Christian Democratic Union even reached its absolute
high point, but this can be explained by a surge in new members after
reunification. By 2010, both parties already had fewer members than
Greenpeace, whose 580,000 members make it Germany's largest NGO.
Parallel to this, between 1970 and 2010, the proportion of people
without any religious affiliation rose to approximately 37 percent.
That there are more churches and political parties today is indicative
of how difficult []{#Page_188 type="pagebreak" title="188"}it has become
for any single organization to attract broad strata of society.

[39](#c2-note-0039a){#c2-note-0039}  Ulrich Beck, *Risk Society: Towards
a New Modernity*, trans. Mark Ritter (London: SAGE, 1992), p. 135.

[40](#c2-note-0040a){#c2-note-0040}  Ferdinand Tönnies, *Community and
Society*, trans. Charles P. Loomis (East Lansing: Michigan State
University Press, 1957).

[41](#c2-note-0041a){#c2-note-0041}  Karl Marx and Friedrich Engels,
"The Manifesto of the Communist Party (1848)," trans. Terrell Carver, in
*The Cambridge Companion to the Communist Manifesto*, ed. Carver and
James Farr (Cambridge: Cambridge University Press, 2015), pp. 237--60,
at 239. For Marx and Engels, this was -- like everything pertaining to
the dynamics of capitalism -- a thoroughly ambivalent development. For,
in this case, it finally forced people "to take a down-to-earth view of
their circumstances, their multifarious relationships" (ibid.).

[42](#c2-note-0042a){#c2-note-0042}  As early as the 1940s, Karl Polanyi
demonstrated in *The Great Transformation* (New York: Farrar & Rinehart,
1944) that the idea of strictly separated spheres, which are supposed to
be so typical of society, is in fact highly ideological. He argued above
all that the attempt to implement this separation fully and consistently
in the form of the free market would destroy the foundations of society
because both the life of workers and the environment of the market
itself would be regarded as externalities. For a recent adaptation of
this argument, see David Graeber, *Debt: The First 5000 Years* (New
York: Melville House, 2011).

[43](#c2-note-0043a){#c2-note-0043}  Tönnies's persistent influence can
be felt, for instance, in Zygmunt Bauman's negative assessment of the
compulsion to strive for community in his *Community: Seeking Safety in
an Insecure World* (Malden, MA: Blackwell, 2001).

[44](#c2-note-0044a){#c2-note-0044}  See, for example, Amitai Etzioni,
*The Third Way to a Good Society* (London: Demos, 2000).

[45](#c2-note-0045a){#c2-note-0045}  Jean Lave and Étienne Wenger,
*Situated Learning: Legitimate Peripheral Participation* (Cambridge:
Cambridge University Press, 1991), p. 98.

[46](#c2-note-0046a){#c2-note-0046}  Étienne Wenger, *Cultivating
Communities of Practice: A Guide to Managing Knowledge* (Boston, MA:
Harvard Business School Press, 2000).

[47](#c2-note-0047a){#c2-note-0047}  The institutions of the
disciplinary society -- schools, factories, prisons and hospitals, for
instance -- were closed. Whoever was inside could not get out.
Participation was obligatory, and instructions had to be followed. See
Michel Foucault, *Discipline and Punish: The Birth of the Prison*,
trans. Alan Sheridan (New York: Pantheon Books, 1977).[]{#Page_189
type="pagebreak" title="189"}

[48](#c2-note-0048a){#c2-note-0048}  Weber famously defined power as
follows: "Power is the probability that one actor within a social
relationship will be in a position to carry out his own will despite
resistance, regardless of the basis on which this probability rests."
Max Weber, *Economy and Society: An Outline of Interpretive Sociology*,
trans. Guenther Roth and Claus Wittich (Berkeley, CA: University of
California Press, 1978), p. 53.

[49](#c2-note-0049a){#c2-note-0049}  For those in complete despair, the
following tip is provided: "To get more likes, start liking the photos
of random people." Such a strategy, it seems, is more likely to increase
than decrease one's hopelessness. The quotations are from "How to Get
More Likes on Your Instagram Photos," *WikiHow* (2016), online.

[50](#c2-note-0050a){#c2-note-0050}  Jeremy Gilbert, *Democracy and
Collectivity in an Age of Individualism* (London: Pluto Books, 2013).

[51](#c2-note-0051a){#c2-note-0051}  Diedrich Diederichsen,
*Eigenblutdoping: Selbstverwertung, Künstlerromantik, Partizipation*
(Cologne: Kiepenheuer & Witsch, 2008).

[52](#c2-note-0052a){#c2-note-0052}  Harrison Rainie and Barry Wellman,
*Networked: The New Social Operating System* (Cambridge, MA: MIT Press,
2012). The term is practical because it is easy to understand, but it is
also conceptually contradictory. An individual (an indivisible entity)
cannot be defined in terms of a distributed network. With a nod toward
Gilles Deleuze, the cumbersome but theoretically more precise term
"dividual" (the divisible) has also been used. See Gerald Raunig,
"Dividuen des Facebook: Das neue Begehren nach Selbstzerteilung," in
Oliver Leistert and Theo Röhle (eds), *Generation Facebook: Über das
Leben im Social Net* (Bielefeld: Transcript, 2011), pp. 145--59.

[53](#c2-note-0053a){#c2-note-0053}  Jari Saramäki et al., "Persistence
of Social Signatures in Human Communication," *Proceedings of the
National Academy of Sciences of the United States of America* 111
(2014): 942--7.

[54](#c2-note-0054a){#c2-note-0054}  The term "weak ties" derives from a
study of where people find out information about new jobs. As the study
shows, this information does not usually come from close friends, whose
level of knowledge often does not differ much from that of the person
looking for a job, but rather from loose acquaintances, whose living
environments do not overlap much with one\'s own and who can therefore
make information available from outside of one\'s own network. See Mark
Granovetter, "The Strength of Weak Ties," *American Journal of
Sociology* 78 (1973): 1360--80.

[55](#c2-note-0055a){#c2-note-0055}  Castells, *The Power of Identity*,
420.

[56](#c2-note-0056a){#c2-note-0056}  Ulf Weigelt, "Darf der Chef
ständige Erreichbarkeit verlangen?" *Zeit Online* (June 13, 2012),
online \[--trans.\].[]{#Page_190 type="pagebreak" title="190"}

[57](#c2-note-0057a){#c2-note-0057}  Hartmut Rosa, *Social Acceleration:
A New Theory of Modernity*, trans. Jonathan Trejo-Mathys (New York:
Columbia University Press, 2013).

[58](#c2-note-0058a){#c2-note-0058}  This technique -- "social freezing"
-- has already become so standard that it is now regarded as a way to help
women achieve a better balance between work and family life. See Kolja
Rudzio, "Social Freezing: Ein Kind von Apple," *Zeit Online* (November 6,
2014), online.

[59](#c2-note-0059a){#c2-note-0059}  See the film *Into Eternity*
(2009), directed by Michael Madsen.

[60](#c2-note-0060a){#c2-note-0060}  Thomas S. Kuhn, *The Structure of
Scientific Revolutions*, 3rd edn (Chicago, IL: University of Chicago
Press, 1996).

[61](#c2-note-0061a){#c2-note-0061}  Werner Busch and Peter Schmoock,
*Kunst: Die Geschichte ihrer Funktionen* (Weinheim: Quadriga/Beltz,
1987), p. 179 \[--trans.\].

[62](#c2-note-0062a){#c2-note-0062}  "'When Attitudes Become Form' at
the Fondazione Prada," *Contemporary Art Daily* (September 18, 2013),
online.

[63](#c2-note-0063a){#c2-note-0063}  Owing to the hyper-capitalization
of the art market, which has been going on since the 1990s, this role
has shifted somewhat from curators to collectors, who, though validating
their choices more on financial than on argumentative grounds, are
essentially engaged in the same activity. Today, leading curators
usually work closely together with collectors and thus deal with more
money than the first generation of curators ever could have imagined.

[64](#c2-note-0064a){#c2-note-0064}  Diedrich Diederichsen, "Showfreaks
und Monster," *Texte zur Kunst* 71 (2008): 69--77.

[65](#c2-note-0065a){#c2-note-0065}  Alexander R. Galloway, *Protocol:
How Control Exists after Decentralization* (Cambridge, MA: MIT Press,
2004), pp. 7, 75.

[66](#c2-note-0066a){#c2-note-0066}  Even the *Frankfurter Allgemeine
Zeitung* -- at least in its online edition -- has begun to publish more
and more articles in English. The newspaper has accepted the
disadvantage of higher editorial costs in order to remain relevant in
the increasingly globalized debate.

[67](#c2-note-0067a){#c2-note-0067}  Joseph Reagle, "'Free as in
Sexist?' Free Culture and the Gender Gap," *First Monday* 18 (2013),
online.

[68](#c2-note-0068a){#c2-note-0068}  Wikipedia\'s own "Editor Survey"
from 2011 reports that 9 percent of its editors are women. Other studies have come
to a slightly higher number. See Benjamin Mako Hill and Aaron Shaw, "The
Wikipedia Gender Gap Revisited: Characterizing Survey Response Bias with
Propensity Score Estimation," *PLOS ONE* 8 (July 26, 2013), online. The
problem is well known, and the Wikimedia Foundation has been making
efforts to correct matters. In 2011, its goal was to increase the
participation of women to 25 percent by 2015. This has not been
achieved.[]{#Page_191 type="pagebreak" title="191"}

[69](#c2-note-0069a){#c2-note-0069}  Shyong (Tony) K. Lam et al.,
"WP: Clubhouse? An Exploration of Wikipedia's Gender Imbalance,"
*WikiSym* 11 (2011), online.

[70](#c2-note-0070a){#c2-note-0070}  David Singh Grewal, *Network Power:
The Social Dynamics of Globalization* (New Haven, CT: Yale University
Press, 2008).

[71](#c2-note-0071a){#c2-note-0071}  Ibid., p. 29.

[72](#c2-note-0072a){#c2-note-0072}  Niklas Luhmann, *Macht im System*
(Berlin: Suhrkamp, 2013), p. 52 \[--trans.\].

[73](#c2-note-0073a){#c2-note-0073}  Mathieu O\'Neil, *Cyberchiefs:
Autonomy and Authority in Online Tribes* (London: Pluto Press, 2009).

[74](#c2-note-0074a){#c2-note-0074}  Eric Steven Raymond, "The Cathedral
and the Bazaar," *First Monday* 3 (1998), online.

[75](#c2-note-0075a){#c2-note-0075}  Jorge Luis Borges, "The Library of
Babel," trans. Anthony Kerrigan, in Borges, *Ficciones* (New York: Grove
Weidenfeld, 1962), pp. 79--88.

[76](#c2-note-0076a){#c2-note-0076}  Heinrich Geiselberger and Tobias
Moorstedt (eds), *Big Data: Das neue Versprechen der Allwissenheit*
(Berlin: Suhrkamp, 2013).

[77](#c2-note-0077a){#c2-note-0077}  This is one of the central tenets
of science and technology studies. See, for instance, Geoffrey C. Bowker
and Susan Leigh Star, *Sorting Things Out: Classification and Its
Consequences* (Cambridge, MA: MIT Press, 1999).

[78](#c2-note-0078a){#c2-note-0078}  Sybille Krämer, *Symbolische
Maschinen: Die Idee der Formalisierung in geschichtlichem Abriß*
(Darmstadt: Wissenschaftliche Buchgesellschaft, 1988), 50--69.

[79](#c2-note-0079a){#c2-note-0079}  Quoted from Doron Swade, "The
'Unerring Certainty of Mechanical Agency': Machines and Table Making in
the Nineteenth Century," in Martin Campbell-Kelly et al. (eds), *The
History of Mathematical Tables: From Sumer to Spreadsheets* (Oxford:
Oxford University Press, 2003), pp. 145--76, at 150.

[80](#c2-note-0080a){#c2-note-0080}  The mechanical construction
suggested by Leibniz was not to be realized as a practically usable (and
therefore patentable) calculating machine until 1820, by which point it
was referred to as an "arithmometer."

[81](#c2-note-0081a){#c2-note-0081}  Krämer, *Symbolische Maschinen*, 98
\[--trans.\].

[82](#c2-note-0082a){#c2-note-0082}  Charles Babbage, *On the Economy of
Machinery and Manufactures* (London: Charles Knight, 1832), p. 153: "We
have already mentioned what may, perhaps, appear paradoxical to some of
our readers -- that the division of labour can be applied with equal
success to mental operations, and that it ensures, by its adoption, the
same economy of time."

[83](#c2-note-0083a){#c2-note-0083}  This structure, which is known as
"Von Neumann architecture," continues to form the basis of almost all
computers.

[84](#c2-note-0084a){#c2-note-0084}  "Gordon Moore Says Aloha to
Moore\'s Law," *The Inquirer* (April 13, 2005), online.[]{#Page_192
type="pagebreak" title="192"}

[85](#c2-note-0085a){#c2-note-0085}  Miriam Meckel, *Next: Erinnerungen
an eine Zukunft ohne uns* (Reinbek bei Hamburg: Rowohlt, 2011). One
could also say that this anxiety has been caused by the fact that the
automation of labor has begun to affect middle-class jobs as well.

[86](#c2-note-0086a){#c2-note-0086}  Steven Levy, "Can an Algorithm
Write a Better News Story than a Human Reporter?" *Wired* (April 24,
2012), online.

[87](#c2-note-0087a){#c2-note-0087}  Alexander Pschera, *Animal
Internet: Nature and the Digital Revolution*, trans. Elisabeth Laufer
(New York: New Vessel Press, 2016).

[88](#c2-note-0088a){#c2-note-0088}  The American intelligence services
are not unique in this regard. *Spiegel* has reported that, in Russia,
entire "bot armies" have been mobilized for the "propaganda battle."
Benjamin Bidder, "Nemzow-Mord: Die Propaganda der russischen Hardliner,"
*Spiegel Online* (February 28, 2015), online.

[89](#c2-note-0089a){#c2-note-0089}  Lennart Guldbrandsson, "Swedish
Wikipedia Surpasses 1 Million Articles with Aid of Article Creation
Bot," [blog.wikimedia.org](http://blog.wikimedia.org) (June 17, 2013),
online.

[90](#c2-note-0090a){#c2-note-0090}  Thomas Bunnell, "The Mathematics of
Film," *Boom Magazine* (November 2007): 48--51.

[91](#c2-note-0091a){#c2-note-0091}  Christopher Steiner, "Automatons
Get Creative," *Wall Street Journal* (August 17, 2012), online.

[92](#c2-note-0092a){#c2-note-0092}  "The Hewlett Foundation: Automated
Essay Scoring," [kaggle.com](http://kaggle.com) (February 10, 2012),
online.

[93](#c2-note-0093a){#c2-note-0093}  Ian Ayres, *Super Crunchers: How
Anything Can Be Predicted* (London: Bookpoint, 2007).

[94](#c2-note-0094a){#c2-note-0094}  Each of these models was tested on
the basis of the 50 million most common search terms from the years
2003--8 and classified according to the time and place of the search.
The results were compared with data from the health authorities. See
Jeremy Ginsberg et al., "Detecting Influenza Epidemics Using Search
Engine Query Data," *Nature* 457 (2009): 1012--4.

[95](#c2-note-0095a){#c2-note-0095}  In absolute terms, the rate of
correct hits, at 15.8 percent, was still relatively low. With the same
dataset, however, random guessing would only have an accuracy of 0.005
percent. See Quoc V. Le et al., "Building High-Level Features Using
Large-Scale Unsupervised Learning,"
[research.google.com](http://research.google.com) (2012), online.

[96](#c2-note-0096a){#c2-note-0096}  Neil Johnson et al., "Abrupt Rise
of New Machine Ecology beyond Human Response Time," *Nature: Scientific
Reports* 3 (2013), online. The authors counted 18,520 of these events
between January 2006 and February 2011; that is, about 15 per day on
average.

[97](#c2-note-0097a){#c2-note-0097}  Gerald Nestler, "Mayhem in Mahwah:
The Case of the Flash Crash; or, Forensic Re-performance in Deep Time,"
in Anselm []{#Page_193 type="pagebreak" title="193"}Franke et al. (eds),
*Forensis: The Architecture of Public Truth* (Berlin: Sternberg Press,
2014), pp. 125--46.

[98](#c2-note-0098a){#c2-note-0098}  Another image recognition
algorithm by Google provides a good impression of the rate of progress.
As early as 2011, it was able to identify dogs in images with 80
percent accuracy. Three years later, this rate had not only increased to
93.5 percent (which corresponds to human capabilities), but the
algorithm could also identify more than 200 different types of dog,
something that hardly any person can do. See Robert McMillan, "This Guy
Beat Google\'s Super-Smart AI -- But It Wasn\'t Easy," *Wired* (January
15, 2015), online.

[99](#c2-note-0099a){#c2-note-0099}  Sergey Brin and Lawrence Page, "The
Anatomy of a Large-Scale Hypertextual Web Search Engine," *Computer
Networks and ISDN Systems* 30 (1998): 107--17.

[100](#c2-note-0100a){#c2-note-0100}  Eugene Garfield, "Citation Indexes
for Science: A New Dimension in Documentation through Association of
Ideas," *Science* 122 (1955): 108--11.

[101](#c2-note-0101a){#c2-note-0101}  Since 1964, the data necessary for
this has been published as the Science Citation Index (SCI).

[102](#c2-note-0102a){#c2-note-0102}  The assumption that the subjects
produce these structures indirectly and without any strategic intention
has proven to be problematic in both contexts. In the world of science,
there are so-called citation cartels -- groups of scientists who
frequently refer to one another\'s work in order to improve their
respective position in the SCI. Search engines have likewise given rise
to search engine optimizers, which attempt by various means to optimize
a website\'s evaluation by search engines.

[103](#c2-note-0103a){#c2-note-0103}  Regarding the history of the SCI
and its influence on the early version of Google\'s PageRank, see Katja
Mayer, "Zur Soziometrik der Suchmaschinen: Ein historischer Überblick
der Methodik," in Konrad Becker and Felix Stalder (eds), *Deep Search:
Die Politik des Suchens jenseits von Google* (Innsbruck: Studienverlag,
2009), pp. 64--83.

[104](#c2-note-0104a){#c2-note-0104}  A site with zero links to it could
not be registered by the algorithm at all, for the search engine indexed
the web by having its "crawler" follow the links itself.

[105](#c2-note-0105a){#c2-note-0105}  "Google Algorithm Change History,"
[moz.com](http://moz.com) (2016), online.

[106](#c2-note-0106a){#c2-note-0106}  Martin Feuz et al., "Personal Web
Searching in the Age of Semantic Capitalism: Diagnosing the Mechanisms
of Personalisation," *First Monday* 17 (2011), online.

[107](#c2-note-0107a){#c2-note-0107}  Brian Dean, "Google\'s 200 Ranking
Factors," *Search Engine Journal* (May 31, 2013), online.

[108](#c2-note-0108a){#c2-note-0108}  Thus, it is not only the world of
advertising that motivates the collection of personal information. Such
information is also needed for the development of personalized
algorithms that []{#Page_194 type="pagebreak" title="194"}give order to
the flood of data. It can therefore be assumed that the rampant
collection of personal information will not cease or slow down even if
commercial demands happen to change, for instance to a business model
that is not based on advertising.

[109](#c2-note-0109a){#c2-note-0109}  For a detailed discussion of how
these three levels are recorded, see Felix Stalder and Christine Mayer,
"Der zweite Index: Suchmaschinen, Personalisierung und Überwachung," in
Konrad Becker and Felix Stalder (eds), *Deep Search: Die Politik des
Suchens jenseits von Google* (Innsbruck: Studienverlag, 2009), pp.
112--31.

[110](#c2-note-0110a){#c2-note-0110}  This raises the question of which
drivers should be sent on a detour so that no traffic jam develops, and
which should be shown the most direct route, now free of traffic.

[111](#c2-note-0111a){#c2-note-0111}  Pamela Vaughan, "Demystifying How
Facebook\'s EdgeRank Algorithm Works," *HubSpot* (April 23, 2013),
online.

[112](#c2-note-0112a){#c2-note-0112}  Lisa Gitelman (ed.), *"Raw Data"
Is an Oxymoron* (Cambridge, MA: MIT Press, 2013).

[113](#c2-note-0113a){#c2-note-0113}  The terms "raw," in the sense of
unprocessed, and "cooked," in the sense of processed, derive from the
anthropologist Claude Lévi-Strauss, who introduced them to clarify the
difference between nature and culture. See Claude Lévi-Strauss, *The Raw
and the Cooked*, trans. John Weightman and Doreen Weightman (Chicago,
IL: University of Chicago Press, 1983).

[114](#c2-note-0114a){#c2-note-0114}  Jessica Lee, "No. 1 Position in
Google Gets 33% of Search Traffic," *Search Engine Watch* (June 20,
2013), online.

[115](#c2-note-0115a){#c2-note-0115}  One estimate that continues to be
cited quite often is already obsolete: Michael K. Bergman, "White Paper
-- The Deep Web: Surfacing Hidden Value," *Journal of Electronic
Publishing* 7 (2001), online. The more content is dynamically generated
by databases, the more questionable such estimates become. It is
uncontested, however, that only a small portion of online information is
registered by search engines.

[116](#c2-note-0116a){#c2-note-0116}  Theo Röhle, "Die Demontage der
Gatekeeper: Relationale Perspektiven zur Macht der Suchmaschinen," in
Konrad Becker and Felix Stalder (eds), *Deep Search: Die Politik des
Suchens jenseits von Google* (Innsbruck: Studienverlag, 2009), pp.
133--48.

[117](#c2-note-0117a){#c2-note-0117}  The phenomenon of preparing the
world to be recorded by algorithms is not restricted to digital
networks. As early as 1994 in Germany, for instance, a new sort of
typeface was introduced (the *Fälschungserschwerende Schrift*,
"forgery-impeding typeface") on license plates for the sake of machine
readability and facilitating automatic traffic control. To the human
eye, however, it appears somewhat misshapen and
disproportionate.[]{#Page_195 type="pagebreak" title="195"}

[118](#c2-note-0118a){#c2-note-0118}  [Fairsearch.org](http://Fairsearch.org)
was officially supported by several of Google\'s competitors, including
Microsoft, TripAdvisor, and Oracle.

[119](#c2-note-0119a){#c2-note-0119}  "Antitrust: Commission Sends
Statement of Objections to Google on Comparison Shopping Service,"
*European Commission: Press Release Database* (April 15, 2015), online.

[120](#c2-note-0120a){#c2-note-0120}  Amit Singhal, "An Update to Our
Search Algorithms," *Google Inside Search* (August 10, 2012), online. By
the middle of 2014, according to some sources, Google had received
around 20 million requests to remove links from its index on account of
copyright violations.

[121](#c2-note-0121a){#c2-note-0121}  Alexander Wragge, "Google-Ranking:
Herabstufung ist 'Zensur light'," *iRights.info* (August 23, 2012),
online.

[122](#c2-note-0122a){#c2-note-0122}  Farhad Manjoo, "Amazon\'s Tactics
Confirm Its Critics\' Worst Suspicions," *New York Times: Bits Blog*
(May 23, 2014), online.

[123](#c2-note-0123a){#c2-note-0123}  Lucas D. Introna and Helen
Nissenbaum, "Shaping the Web: Why the Politics of Search Engines
Matters," *Information Society* 16 (2000): 169--85, at 181.

[124](#c2-note-0124a){#c2-note-0124}  Eli Pariser, *The Filter Bubble:
How the New Personalized Web Is Changing What We Read and How We Think*
(New York: Penguin, 2012).

[125](#c2-note-0125a){#c2-note-0125}  Antoinette Rouvroy, "The End(s) of
Critique: Data-Behaviourism vs. Due-Process," in Katja de Vries and
Mireille Hildebrandt (eds), *Privacy, Due Process and the Computational
Turn: The Philosophy of Law Meets the Philosophy of Technology* (New
York: Routledge, 2013), pp. 143--65.

[126](#c2-note-0126a){#c2-note-0126}  See B. F. Skinner, *Science and
Human Behavior* (New York: The Free Press, 1953), p. 35: "We undertake
to predict and control the behavior of the individual organism. This is
our 'dependent variable' -- the effect for which we are to find the
cause. Our 'independent variables' -- the causes of behavior -- are the
external conditions of which behavior is a function."

[127](#c2-note-0127a){#c2-note-0127}  Nathan Jurgenson, "View from
Nowhere: On the Cultural Ideology of Big Data," *New Inquiry* (October
9, 2014), online.

[128](#c2-note-0128a){#c2-note-0128}  danah boyd and Kate Crawford,
"Critical Questions for Big Data: Provocations for a Cultural,
Technological and Scholarly Phenomenon," *Information, Communication &
Society* 15 (2012): 662--79.
:::
:::

[III]{.chapterNumber} [Politics]{.chapterTitle} {#c3}

::: {.section}
Referentiality, communality, and algorithmicity have become the
characteristic forms of the digital condition because more and more
people -- in more and more segments of life and by means of increasingly
complex technologies -- are actively (or compulsorily) participating in
the negotiation of social meaning. They are thus reacting to the demands
of a chaotic, overwhelming sphere of information and thereby
contributing to its greater expansion. It is the ubiquity of these forms
that makes it possible to speak of the digital condition in the
singular. The goals pursued in these cultural forms, however, are as
diverse, contradictory, and conflicted as society itself. It would
therefore be equally false to assume uniformity or an absence of
alternatives in the unfolding of social and political developments. On
the contrary, the idea of a lack of alternatives is an ideological
assertion that is itself part of a specific political agenda.

In order to resolve this ostensible contradiction between developments
that take place in a manner that is uniform and beyond influence and
those that are characterized by the variable and open-ended
implementation of diverse interests, it is necessary to differentiate
between two levels. One possibility for doing so is presented by Marxist
political economy. It distinguishes between *productive forces*, which
are defined as the technical infrastructure, the state of knowledge, and
the []{#Page_125 type="pagebreak" title="125"}organization of labor, and
the *relations of production*, which are defined as the institutions,
laws, and practices in which people are able to realize the
techno-cultural possibilities of their time. Both are related to one
another, though each develops with a certain degree of autonomy. The
relation between them is essential for the development of society. The
closer they correspond to one another, the more smoothly this
development will run its course; the more contradictions happen to exist
between them, the more this course will suffer from unrest and
conflicts. One of many examples of a current contradiction between these
two levels is the development that has occurred in the area of cultural
works. Whereas radical changes have taken place in their production,
processing, and reproduction (that is, on the level of productive
forces), copyright law (that is, the level of the relations of
production) has remained almost unchanged. In Marxist theory, such
contradictions are interpreted as a starting point for political
upheavals, indeed as a precondition for revolution. As Marx wrote:

::: {.extract}
At a certain stage of development, the material productive forces of
society come into conflict with the existing relations of production or
-- this merely expresses the same thing in legal terms -- with the
property relations within the framework of which they have operated
hitherto. From forms of development of the productive forces these
relations turn into their fetters. Then begins an era of social
revolution.[^1^](#c3-note-0001){#c3-note-0001a}
:::

Many theories aiming to overcome capitalism proceed on the basis of this
dynamic.[^2^](#c3-note-0002){#c3-note-0002a} The distinction between
productive forces and the relations of production, however, is not
unproblematic. On the one hand, no one has managed to formulate an
entirely convincing theory concerning the reciprocal relation between
the two. What does it mean, exactly, that they are related to one
another and yet are simultaneously autonomous? When does the moment
arrive in which they come into conflict with one another? And what,
exactly, happens then? For the most part, these are unsolved questions.
On the other hand, because of the blending of work and leisure already
mentioned, as well as the general economization of social activity (as
is happening on social []{#Page_126 type="pagebreak" title="126"}mass
media and in the creative economy, for instance), it is hardly possible
now to draw a line between production and reproduction. Thus, this set
of concepts, which is strictly oriented toward economic production
alone, is more problematic than ever. My decision to use these concepts
is therefore limited to clarifying the conceptual transition from the
previous chapter to the chapter at hand. The concern of the last chapter
was to explain the forms that cultural processes have adopted under the
present conditions -- ubiquitous telecommunication, general expressivity
(referentiality), flexible cooperation (communality), and informational
automation (algorithmicity). In what follows, on the contrary, my focus
will turn to the political dynamics that have emerged from the
realization of "productive forces" as concrete "relations of production"
or, in more general terms, as social relations. Without claiming to be
comprehensive, I have assigned the confusing and conflicting
multiplicity of actors, projects, and institutions to two large
political developments: post-democracy and commons. The former is moving
toward an essentially authoritarian society, while the latter is moving
toward a radical renewal of democracy by broadening the scope of
collective decision-making. Both cases involve more than just a few
minor changes to the existing order. Rather, both are ultimately leading
to a new political constellation beyond liberal representative
democracy.
:::

::: {.section}
Post-democracy {#c3-sec-0002}
--------------

The current dominant political development is the spread and
entrenchment of post-democracy. The term was coined in the middle of the
1990s by Jacques Rancière. "Post-democracy," as he defined it, "is the
government practice and conceptual legitimization of a democracy *after*
the demos, a democracy that has eliminated the appearance, miscount and
dispute of the people."[^3^](#c3-note-0003){#c3-note-0003a} Rancière
argued that the immediate presence of the people (the demos) has been
abolished and replaced by processes of simulation and modeling such as
opinion polls, focus groups, and plans for various scenarios -- all
guided by technocrats. Thus, he believed that the character of political
processes has changed, namely from disputes about how we []{#Page_127
type="pagebreak" title="127"}ought to face a principally open future to
the administration of predefined necessities and fixed constellations.
As early as the 1980s, Margaret Thatcher justified her radical reforms
with the expression "There is no alternative!" Today, this form of
argumentation remains part of the core vocabulary of post-democratic
politics. Even Angela Merkel is happy to call her political program
*alternativlos* ("without alternatives"). According to Rancière, this
attitude is representative of a government practice that operates
without the unpredictable presence of the people and their dissent
concerning fundamental questions. All that remains is "police logic," in
which everything is already determined, counted, and managed.

Ten years after Rancière\'s ruminations, Colin Crouch revisited the
concept and defined it anew. His notion of post-democracy is as follows:

::: {.extract}
Under this model, while elections certainly exist and can change
governments, public electoral debate is a tightly controlled spectacle,
managed by rival teams of professionals expert in the technique of
persuasion, and considering a small range of issues selected by those
teams. The mass of citizens plays a passive, quiescent, even apathetic
part, responding only to the signals given them. Behind this spectacle
of the electoral game, politics is really shaped in private by
interaction between elected governments and elites that overwhelmingly
represent business interests.[^4^](#c3-note-0004){#c3-note-0004a}
:::

He goes on:

::: {.extract}
My central contentions are that, while the forms of democracy remain
fully in place and today in some respects are actually strengthened --
politics and government are increasingly slipping back into the control
of privileged elites in the manner characteristic of predemocratic
times; and that one major consequence of this process is the growing
impotence of egalitarian causes.[^5^](#c3-note-0005){#c3-note-0005a}
:::

In his analysis, Crouch focused on the Western political system in the
strict sense -- parties, parliaments, governments, eligible voters --
and in particular on the British system under Tony Blair. He described
the development of representative democracy as a rising and declining
curve, and he diagnosed []{#Page_128 type="pagebreak" title="128"}not
only an erosion of democratic institutions but also a shift in the
legitimation of public activity. In this regard, according to Crouch,
the participation of citizens in political decision-making (input
legitimation) has become far less important than the quality of the
achievements that are produced for the citizens (output legitimation).
Out of democracy -- the "dispute of the people," in Rancière\'s sense --
emerges governance. As Crouch maintains, however, this shift was
accompanied by a sustained weakening of public institutions, because it
was simultaneously postulated that private actors are fundamentally more
efficient than the state. This argument was used (and continues to be
used) to justify taking an increasing number of services away from
public actors and entrusting them instead to the private sphere, which
has accordingly become more influential and powerful. One consequence of
this has been, according to Crouch, "the collapse of self-confidence on
the part of the state and the meaning of public authority and public
service."[^6^](#c3-note-0006){#c3-note-0006a} Ultimately, the threat at
hand is the abolishment of democratic institutions in the name of
efficiency. These institutions are then replaced by technocratic
governments without a democratic mandate, as has already happened in
Greece, Portugal, and Ireland, where external overseers have been
directly or indirectly determining the political situation.

::: {.section}
### Social mass media as an everyday aspect of post-democratic life {#c3-sec-0003}

For my purposes, it is of little interest whether the concept of "public
authority" really ought to be revived or whether and in what
circumstances the parable of rising and declining will help us to
understand the development of liberal
democracy.[^7^](#c3-note-0007){#c3-note-0007a} Rather, it is necessary
to supplement Crouch\'s approach in order to make it fruitful for our
understanding of the digital condition, which extends greatly beyond
democratic processes in the classical sense -- that is, processes in which
far-reaching decisions about issues concerning society are made in a
formalized and binding manner legitimized by citizen participation. I will
therefore designate as "post-democratic" all of those developments --
wherever they are taking place -- that, although admittedly preserving
or even providing new []{#Page_129 type="pagebreak"
title="129"}possibilities for participation, simultaneously also
strengthen the capacity for decision-making on levels that preclude
co-determination. This has brought about a lasting separation between
social participation and the institutional exertion of power. These
developments, the everyday instances of which may often be harmless and
banal, create as a whole the cultural preconditions and experiences that
make post-democracy -- both in Crouch\'s strict sense and the broader
sense of Rancière -- seem normal and acceptable.

In an almost ideal-typical form, the developments in question can be
traced alongside the rise of commercially driven social mass media.
Their shape, however, is not a matter of destiny (it is not the result
of any technological imperative) but rather the consequence of a
specific political, economic, and technical constellation that realized
the possibilities of the present (productive forces) in particular
institutional forms (relations of production) and was driven to do so in
the interest of maximizing profit and control. A brief look at the
history of digital communication will be enough to clarify this. In the
middle of the 1990s, the architecture of the internet was largely
decentralized and based on open protocols. The attempts of America
Online (AOL) and CompuServe to run a closed network (an intranet, as we
would call it today) to compete with the open internet were
unsuccessful. The large providers never really managed to address the
need or desire of users to become active producers of meaning. Even the
most popular elements of these closed worlds -- the forums in which
users could interact relatively directly with one another -- lacked the
diversity and multiplicity of participatory options that made the open
internet so attractive.

One of the most popular and radical services on the open internet was
email. The special thing about it was that electronic messages could be
used both for private (one-to-one) and for communal (many-to-many)
communication of all sorts, and thus it helped to merge the previously
distinct domains of the private and the communal. By the middle of the
1980s, and with the help of specialized software, it was possible to
create email lists with which one could send messages efficiently and
reliably to small and large groups. Users could join these groups
without much effort. From the beginning, email has played a significant
role in the creation []{#Page_130 type="pagebreak" title="130"}of
communal formations. Email was one of the first technologies that
enabled the horizontal coordination of large and dispersed groups, and
it was often used to that end. Linus Torvalds\'s famous call for people
to collaborate with him on his operating system -- which was then "just
a hobby" but today, as Linux, makes up part of the infrastructure of the
internet -- was issued on August 25, 1991, via email (and news groups).

One of the most important features of email derived from the service being
integrated into an infrastructure that was decentralized by means of
open protocols. And so it has remained. The fundamental Simple Mail
Transfer Protocol (SMTP), which is still being used, is based on a
so-called Request for Comments (RFC) from 1982. In this document, which
sketched out the new protocol and made it open to discussion, it was
established from the outset that communication should be enabled between
independent networks.[^8^](#c3-note-0008){#c3-note-0008a} On the basis
of this standard, it is thus possible today for different providers to
create an integrated space for communication. Even though they are in
competition with one another, they nevertheless cooperate on the level
of the technical protocol and allow users to send information back and
forth regardless of which providers are used. A choice to switch
providers would not mean forfeiting one\'s address book or any other
data. Those who put convenience first can use one of the large
commercial providers, or they can choose one of the many small
commercial or non-commercial services that specialize in certain niches.
It is even possible to set up one\'s own server in order to control this
piece of infrastructure independently. In short, thanks to the
competition between providers or because they themselves command the
necessary technical know-how, users continue to have the opportunity to
influence the infrastructure directly and thus to co-determine the
essential (technical) parameters that allow for specific courses of
action. Admittedly, modern email services are set up in such a way that
most of their users remain on the surface, while the essential decisions
about how they are able to act are made on the "back side"; that is, in
the program code, in databases, and in configuration files. Yet these
two levels are not structurally (that is, organizationally and
technically) separated from one another. Whoever is willing and ready to
[]{#Page_131 type="pagebreak" title="131"}appropriate the corresponding
and freely available technical knowledge can shift back and forth
between them. Before the internet was made suitable for the masses, it
had been necessary to possess such knowledge in order to use the often
complicated and error-prone infrastructure at all.
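
To make this provider-independence concrete: a minimal sketch using the
Python standard library module `smtplib` might look as follows. Every
hostname, address, and credential in it is a hypothetical placeholder;
any standards-compliant provider -- or a self-hosted server -- could
take their place, which is precisely what the open protocol guarantees.

```python
# Minimal sketch: handing a message to a standards-compliant SMTP server.
# All hostnames, addresses, and credentials below are hypothetical.
import smtplib
from email.message import EmailMessage

message = EmailMessage()
message["From"] = "alice@provider-a.example"
message["To"] = "bob@provider-b.example"  # user of a different, independent provider
message["Subject"] = "Across network boundaries"
message.set_content("SMTP lets independent providers exchange messages.")

# Any provider submission server (or a self-hosted one) fits here.
with smtplib.SMTP("smtp.provider-a.example", 587) as connection:
    connection.starttls()                      # encrypt the session
    connection.login("alice", "app-password")  # placeholder credentials
    connection.send_message(message)
```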

Over the last 10 to 15 years, these structures have been radically
changed by commercially driven social mass media, which have been
dominated by investors. They began to offer a variety of services in a
user-friendly form and thus enabled the great majority of the population
to make use of complex applications on an everyday basis. This, however,
has gone hand in hand with the centralization of applications and user
information. In the case of email, this happened through the
introduction of Webmail, which always stores every message on the
provider\'s computers, where messages can be read and composed via a web
browser.[^9^](#c3-note-0009){#c3-note-0009a} From that point on,
providers have been able to follow everything that users write in their
emails. Thanks to nearly comprehensive internet connectivity, Webmail is
very widespread today, and the large providers -- above all Google,
whose Gmail service had more than 500 million users in 2014 -- dominate
the market. The gap has thus widened between user interfaces and the
processes that take place behind them on servers and in data centers,
and this has expanded what Crouch referred to as "the influence of the
privileged elite." In this case, the elite are the engineers and
managers employed by the large providers, and everyone else with access
to the underbelly of the infrastructure, including the British
Government Communications Headquarters (GCHQ) and the US National
Security Agency (NSA), both of which employ programs such as MUSCULAR
to record data transfers between the computer centers operated by large
American providers.[^10^](#c3-note-0010){#c3-note-0010a}

Nevertheless, email essentially remains an open application, for the
SMTP protocol forces even the largest providers to cooperate. Small
providers are able to collaborate with the latter and establish new
services with them. And this creates options. Since Edward Snowden\'s
revelations, most people are aware that all of their online activities
are being monitored, and this has spurred new interest in secure email
services. In the meantime, there has been a whole series of projects
aimed at combining simple usability with complex []{#Page_132
type="pagebreak" title="132"}encryption in order to strengthen the
privacy of normal users. This same goal has led to a number of
successful crowd-funding campaigns, which indicates that both the
interest and the resources are available to accomplish
it.[^11^](#c3-note-0011){#c3-note-0011a} For users, however, these
offers are only attractive if they are able to switch providers without
great effort. Moreover, such new competition has motivated established
providers to modify their own
infrastructure.[^12^](#c3-note-0012){#c3-note-0012a} In the case of
email, the level on which new user options are created is still
relatively closely linked to that on which generally binding decisions
are made and implemented. In this sense, email is not a post-democratic
technology.
:::

::: {.section}
### Centralization and the power of networks {#c3-sec-0004}

Things are entirely different in the case of new social mass media such
as Facebook, Twitter, LinkedIn, WhatsApp, or most of the other
commercial services that were developed after the year 2000. Almost all
of them are based on standards that are closed and controlled by the
network operators, and these standards prevent users from communicating
beyond the boundaries defined by the providers. Through Facebook, it is
only possible to be in touch with other users of the platform, and
whoever leaves the platform will have to give up all of his or her
Facebook friends.

As with email, these services also rely on people producing their own
content. By now, Facebook has more than a billion users, and each of
them has produced at least a rudimentary personal profile and a few
likes. Thanks to networking opportunities, which make up the most
important service offered by all of these providers, communal formations
can be created with ease. Every day, groups are formed that organize
information, knowledge, and resources in order to establish self-defined
practices (both online and offline). The immense amounts of data,
information, and cultural references generated by this are pre-sorted by
algorithms that operate in the background to ensure that users never
lose their orientation.[^13^](#c3-note-0013){#c3-note-0013a} Viewed from
the perspective of output legitimation -- that is, in terms of what
opportunities these services provide and at what cost -- such offers are
extremely attractive. Examined from the perspective of input
legitimation -- that is, in terms []{#Page_133 type="pagebreak"
title="133"}of how essential decisions are made -- things look rather
different. By means of technical, organizational, and legal standards,
Facebook and other operators of commercially driven social mass media
have created structures in which the level of user interaction is
completely separated from the level on which essential decisions are
made that concern the community of users. Users have no way to influence
the design or development of the conditions under which they (have to)
act. At best, it remains possible to choose one aspect or another from a
predetermined offer; that is, to use certain options or not. Take it or
leave it. As to which options and features are available, users can
neither determine this nor have any direct influence over the matter. In
short, commercial social networks have institutionalized a power
imbalance between those engaged with the user interface and those who
operate the services behind the scenes. The ability of users to
organize themselves and exert influence -- over the way their data are
treated, for instance -- is severely limited.

One (nominal) exception to this was Facebook itself. From 2009 to 2012,
the company allowed users to vote on any proposed change to its terms
and conditions that attracted more than 7,000 comments. If 30 percent of
all registered members participated, then the
result would be binding. In practice, however, this rule did not have
any consequences, for the quorum was never achieved. This is no
surprise, because Facebook did not make any effort to increase
participation. In fact, the opposite was true. As the privacy activist
Max Schrems has noted, without mincing words, "After grand promises of
user participation, the ballot box was then hidden away for
safekeeping."[^14^](#c3-note-0014){#c3-note-0014a} With reference to the
apparent lack of interest on the part of its users, Facebook did away
with the possibility to vote and replaced it with the option of
directing questions to management.[^15^](#c3-note-0015){#c3-note-0015a}
Since then, and even in the case of fundamental decisions that concern
everyone involved, there has been no way for users to participate in the
discussion. This new procedure, which was used to implement a
comprehensive change in Facebook\'s privacy policy, was described by the
company\'s founder Mark Zuckerberg as follows: "We decided that these
would be the social norms now, and we just went for
it."[^16^](#c3-note-0016){#c3-note-0016a} It is not exactly clear whom
he meant by "we." What is clear, []{#Page_134 type="pagebreak"
title="134"}however, is that the number of people involved with
decision-making is minute in comparison with the number of people
affected by the decisions to be made.

It should come as no surprise that, with the introduction of every new
feature, providers such as Facebook have further tilted the balance of
power between users and operators. With every new version and with every
new update, the possibilities of interaction are changed in such a way
that, within closed networks, more data can be produced in a more
uniform format. Thus, it becomes easier to make connections between
them, which is their only real source of value. Facebook\'s compulsory
"real-name" policy, for instance, which no longer permits users to
register under a pseudonym, makes it easier for the company to create
comprehensive user profiles. Another standard allows the companies to
assemble, in the background, a uniform profile out of the activities of
users on sites or applications that seem at first to have nothing to do
with one another.[^17^](#c3-note-0017){#c3-note-0017a} Google, for
instance, connects user data from its search function with information
from YouTube and other online services, but also with data from Nest, a
networked thermostat. Facebook connects data from its social network
with those from WhatsApp, Instagram, and the virtual-reality service
Oculus.[^18^](#c3-note-0018){#c3-note-0018a} This trend is far from
over. Many services are offering more and more new functions for
generating data, and entire new areas of recording data are being
developed (think, for instance, of Google\'s self-driving car). Yet
users have access to just a minuscule portion of the data that they
themselves have generated and with which they are being described. This
information is fully available to the programmers and analysts alone.
All of this is done -- as the sanctimonious argument goes -- in the name
of data protection.
:::

::: {.section}
### Selling, predicting, modifying {#c3-sec-0005}

Unequal access to information has resulted in an imbalance of power, for
the evaluation of data opens up new possibilities for action. Such data
can be used, first, to earn revenue from personalized advertisements;
second, to predict user behavior with greater accuracy; and third, to
adjust the parameters of interaction in such a way that preferred
patterns of []{#Page_135 type="pagebreak" title="135"}behavior become
more likely. Almost all commercially driven social mass media are
financed by advertising. In 2014, Facebook, Google, and Twitter earned
90 percent of their revenue through such means. It is thus important for
these companies to learn as much as possible about their users in order
to optimize access to them and sell this access to
advertisers.[^19^](#c3-note-0019){#c3-note-0019a} Google and Facebook
justify the price for advertising on their sites by claiming that they
are able to direct the messages of advertisers precisely to those people
who would be most susceptible to them.

Detailed knowledge about users, moreover, also provides new
possibilities for predicting human
behavior.[^20^](#c3-note-0020){#c3-note-0020a} In 2014, Facebook made
headlines by claiming that it could predict a future romantic
relationship between two of its members, and even that it could do so
about a hundred days before the new couple changed their profile status
to "in a relationship." The basis of this sort of prognosis is the
changing frequency with which two people exchange messages over the
social network. In this regard, it does not matter whether these
messages are private (that is, only for the two of them), semi-public
(only for friends), or public (visible to
everyone).[^21^](#c3-note-0021){#c3-note-0021a} Facebook and other
social mass media are set up in such a way that those who control the
servers are always able to see everything. All of this information,
moreover, is formatted in such a way as to optimize its statistical
analysis. As the amounts of data increase, even the smallest changes in
frequencies and correlations begin to gain significance. In its study of
romantic relationships, for instance, Facebook discovered that the
number of online interactions reaches its peak 12 days before a
relationship begins and hits its low point 85 days after the status
update (probably because of an increasing number of offline
interactions).[^22^](#c3-note-0022){#c3-note-0022a} The difference in
the frequency of online interactions between the high point and the low
point was just 0.14 updates per day. In other words, Facebook\'s
statisticians could recognize and evaluate when users would post, over
the course of seven days, one more message than they might usually
exchange. With traditional methods of surveillance, which focus on
individual people, such a small deviation would not have been detected.
To do so, it is necessary to have immense numbers of users generating
immense volumes of data. Accordingly, these new []{#Page_136
type="pagebreak" title="136"}analytic possibilities do not mean that
Facebook can accurately predict the behavior of a single user. The
unique person remains difficult to calculate, for all that could be
ascertained from this information would be a minimally different
probability of future behavior. As regards a single person, this gain in
knowledge would not be especially useful, for a slight change in
probability has no predictive power on a case-by-case basis. If, in the
case of a unique person, the probability of a particular future action
climbs from, say, 30 to 31 percent, then not much is gained with respect
to predicting this one person\'s behavior. If vast numbers of similar
people are taken into account, however, then the power of prediction
increases enormously. If, in the case of 1 million people, the
probability of a future action increases by 1 percentage point, this means that,
in the future, around 10,000 more people will act in a certain way.
Although it may be impossible to say for sure which members of the
"group" these might be, this is not relevant to the value of the prediction (to
an advertising agency, for instance).
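
The step from the individual to the aggregate is a matter of simple
arithmetic; the sketch below merely restates the figures already given
above (a shift from 30 to 31 percent across one million people).

```python
# Restating the aggregate arithmetic from the passage above.
population = 1_000_000
baseline_probability = 0.30  # assumed 30% chance of a given future action
shifted_probability = 0.31   # after a shift of one percentage point

additional_actors = population * (shifted_probability - baseline_probability)
print(f"Expected additional actors: {additional_actors:,.0f}")  # roughly 10,000
```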

It is also possible to influence large groups by changing the parameters
of their informational environment. Many online news portals, for
instance, simultaneously test multiple headlines during the first
minutes after the publication of an article (that is, different groups
are shown different titles for the same article). These so-called A/B
tests are used to measure which headlines attract the most clicks. The
most successful headline is then adopted and shown to larger
groups.[^23^](#c3-note-0023){#c3-note-0023a} This, however, is just the
beginning. All services are constantly changing their features for
select focus groups without any notification, and this is happening both
on the level of the user interface and on that of their hidden
infrastructure. In this way, reactions can be tested in order to
determine whether a given change should be implemented more broadly or
rejected. If these experiments and interventions are undertaken with
commercial intentions -- to improve the placement of advertisements, for
instance -- then they hardly trigger any special reactions. Users will
grumble when their customary procedures are changed, but this is
usually a matter of short-term irritation, for users know that they can
hardly do anything about it beyond expressing their discontent. A
greater stir was caused by an experiment conducted in the middle of
2014, []{#Page_137 type="pagebreak" title="137"}for which Facebook
manipulated the timelines of 689,003 of its users, approximately 0.04
percent of all members. The selected members were divided into two
groups, one of which received more "positive" messages from their circle
of friends while the other received more "negative" messages. For a
control group, the filter settings were left unchanged. The goal was to
investigate whether, without any direct interaction or non-verbal cues
(mimicry, for example), the mood of a user could be influenced by the
mood that he or she perceives in others -- that is, whether so-called
"emotional contagion," which had hitherto only been demonstrated in the
case of small and physically present groups, also took place online. The
answer, according to the results of the study, was a resounding
"yes."[^24^](#c3-note-0024){#c3-note-0024a} Another conclusion, though
one that the researchers left unexpressed, is that Facebook can
influence this process in a controlled manner. Here, it is of little
interest whether it is genuinely possible to manipulate the emotional
condition of someone posting on Facebook by increasing the presence of
certain key words, or whether the presence of these words simply
increases the social pressure for someone to appear in a better or worse
mood.[^25^](#c3-note-0025){#c3-note-0025a} What is striking is rather
the complete disregard of one of the basic ethical principles of
scientific research, namely that human subjects must be informed about
and agree to any experiments performed on or with them ("informed
consent"). This disregard was not a mere oversight; the authors of the
study were alerted to the issue before publication, and the methods were
subjected to an internal review. The result: Facebook\'s terms of use
allow such methods, no legal claims could be made, and the modulation of
the newsfeed by changing filter settings is so common that no one at
Facebook could see anything especially wrong with the
experiment.[^26^](#c3-note-0026){#c3-note-0026a}
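
The A/B-testing routine described at the beginning of this passage
reduces to a simple selection rule: show variants to separate groups,
count the clicks, and keep the best-performing variant for the larger
audience. The sketch below, with invented headlines and click counts,
illustrates only that rule; in practice such selection runs continuously
and invisibly across headlines, features, and filter settings.

```python
# Sketch of the selection rule behind a simple headline A/B test.
# Headlines and measurements are invented for illustration.
trial_results = {
    "Headline A": {"impressions": 10_000, "clicks": 180},
    "Headline B": {"impressions": 10_000, "clicks": 240},
    "Headline C": {"impressions": 10_000, "clicks": 150},
}

def click_through_rate(result: dict) -> float:
    """Share of impressions that led to a click."""
    return result["clicks"] / result["impressions"]

# Adopt the variant with the highest click-through rate for everyone else.
winner = max(trial_results, key=lambda name: click_through_rate(trial_results[name]))
print(winner, f"{click_through_rate(trial_results[winner]):.2%}")  # Headline B 2.40%
```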

Why would they? All commercially driven social mass media conduct
manipulative experiments. From the perspective of "data behaviorism,"
this is the best way to acquire feedback from users -- far better than
direct surveys.[^27^](#c3-note-0027){#c3-note-0027a} Facebook had
already conducted experiments in order to intervene directly in
political processes. On November 2, 2010, the social mass medium tested,
by manipulating timelines, whether it might be possible to increase
voter turnout for the American midterm elections that were taking place
[]{#Page_138 type="pagebreak" title="138"}on that day. An application
was surreptitiously loaded into the timelines of more than 10 million
people that contained polling information and a list of friends who had
already voted. It was possible to collect this data because the
application had a built-in function that enabled people to indicate
whether they had already cast a vote. A control group received a message
that encouraged them to vote but lacked any personalization or the
possibility of social interaction. This experiment, too, relied on the
principle of "contagion." By the end of the day, those who saw that
their friends had already voted were 0.39 percent more likely to go to
the polls than those in the control group. In relation to a single
person, the extent of this influence was thus extremely weak and barely
relevant. Indeed, it would be laughable even to speak of influence at
all if only 250 people had altered their behavior. Personal experience
suggests that one cannot be manipulated by such things. It would be
false to conclude, however, that such interventions are irrelevant, for
matters are entirely different where large groups are concerned. On
account of Facebook\'s small experiment, approximately 60,000 people
voted who otherwise would have stayed at home, and around 340,000 extra
votes were cast (because most people do not go to vote alone but rather
bring along friends and family members, who vote at the same
time).[^28^](#c3-note-0028){#c3-note-0028a} These are relevant numbers
if the margins are narrow between the competing parties or candidates,
especially if the people who receive the extra information and incentive
are not -- as they were for this study -- chosen at
random.[^29^](#c3-note-0029){#c3-note-0029a} Facebook already possesses,
in excess, the knowledge necessary to focus on a particular target
group, for instance on people whose sympathies lie with one party or
another.[^30^](#c3-note-0030){#c3-note-0030a}
:::

::: {.section}
### The dark shadow of cybernetics {#c3-sec-0006}

Far from being unusual, the manipulation of information behind the backs
of users is rather something that is done every day by commercially
driven social mass media, which are not primarily channels for
transmitting content but rather -- and above all -- environments in
which we live. Both of the examples discussed above illustrate what is
possible when these environments, which do not represent the world but
[]{#Page_139 type="pagebreak" title="139"}rather generate it, are
centrally controlled, as is presently the case. Power is being exercised
not by directly stipulating what each individual ought to do, but rather
by altering the environment in which everyone is responsible for finding
his or her way. The baseline of facts can be slightly skewed in order to
increase the probability that this modified facticity will, as a sort
of social gravity, guide things in a certain direction. At work here is
the fundamental insight of cybernetics, namely that the "target" to be
met -- be it an enemy bomber,[^31^](#c3-note-0031){#c3-note-0031a} a
citizen, or a customer -- orients its behavior to its environment, to
which it is linked via feedback. From this observation, cybernetically
oriented social planners soon drew the conclusion that the best (because
indirect and hardly perceptible) method for influencing the "target"
would be to alter its environment. As early as the beginning of the
1940s, the anthropologist and cyberneticist Gregory Bateson posed the
following question: "How would we rig the maze or problem-box so that
the anthropomorphic rat shall obtain a repeated and reinforced
impression of his own free will?"[^32^](#c3-note-0032){#c3-note-0032a}
Though Bateson\'s formulation is somewhat flippant, there was a serious
backdrop to this problem. The electoral success of the Nazis during the
1930s seemed to have indicated that the free expression of will can have
catastrophic political consequences. In response to this, the American
planners of the post-war order made it their objective to steer the
population toward (or keep it on) the path of liberal, market-oriented
democracy without obviously undermining the legitimacy of liberal
democracy itself, namely its basis in the individual\'s free will and
freedom of choice. According to the French author collective Tiqqun,
this paradox was resolved by the introduction of "a new fable that,
after the Second World War, definitively \[...\] supplanted the liberal
hypothesis. Contrary to the latter, it proposes to conceive biological,
physical and social behaviors as something integrally programmed and
re-programmable."[^33^](#c3-note-0033){#c3-note-0033a} By the term
"liberal hypothesis," Tiqqun meant the assumption, stemming from the
time of the Enlightenment, that people could improve themselves by
applying their own reason and exercising their own moral faculties, and
could free themselves from ignorance through education and reflection.
Thus, they could become autonomous individuals and operate as free
actors (both as market []{#Page_140 type="pagebreak"
title="140"}participants and as citizens). The liberal hypothesis is
based on human understanding. The cybernetic hypothesis is not. Its
conception of humans is analogous to its conception of animals, plants,
and machines; like the latter, people are organisms that react to
stimuli from their environment. The hypothesis is thus associated with
the theories of "instrumental conditioning," which had been formulated
by behaviorists during the 1940s. In the case of both humans and other
animals, as it was argued, learning is not a process of understanding
but rather one of executing a pattern of stimulus and response. To learn
is thus to adopt a pattern of behavior with which one\'s own activity
elicits the desired reaction. In this model, understanding does not play
any role; all that matters is
behavior.[^34^](#c3-note-0034){#c3-note-0034a}

And this behavior, according to the cybernetic hypothesis, can be
programmed not by directly accessing people (who are conceived as
impenetrable black boxes) but rather by indirectly altering the
environment, with which organisms and machines are linked via feedback.
These interventions are usually so subtle as to not be perceived by the
individual, and this is because there is no baseline against which it is
possible to measure the extent to which the "baseline of facts" has been
tilted. Search results and timelines are always being filtered and,
owing to personalization, a search will hardly ever generate the same
results twice. On a case-by-case basis, the effects of this are often
minimal for the individual. In aggregate and over long periods of time,
however, the effects can be substantial without the individual even
being able to detect them. Yet the practice of controlling behavior by
manipulating the environment is not limited to the environment of
information. In their enormously influential book from 2008, *Nudge*,
Richard Thaler and Cass Sunstein even recommended this as a general
method for "nudging" people, almost without their notice, in the
direction desired by central planners. To accomplish this, it is
necessary for the environment to be redesigned by the "choice architect"
-- by someone, for instance, who can organize the groceries in a store
in such a way as to increase the probability that shoppers will reach
for healthier options. They refer to this system of control as
"libertarian paternalism" because it combines freedom of choice
(libertarianism) with obedience []{#Page_141 type="pagebreak"
title="141"}to an -- albeit invisible -- authority figure
(paternalism).[^35^](#c3-note-0035){#c3-note-0035a} The ideal sought by
the authors is a sort of unintrusive caretaking. In the spirit of
cybernetics and in line with the structures of post-democracy, the
expectation is for people to be moved in the experts\' chosen direction
by means of a change to their environment, while simultaneously
maintaining the impression that they are behaving in a free and
autonomous manner. The compatibility of this approach with agendas on
both sides of the political spectrum is evident in the fact that the
Democratic president Barack Obama regularly sought Cass Sunstein\'s
advice and, in 2009, made him the director of the Office of Information
and Regulatory Affairs, while Richard Thaler, in 2010, was appointed to
the advisory board of the so-called Behavioural Insights Team, which,
known as the "nudge unit," had been founded by the Conservative prime
minister David Cameron.

In the case of social mass media, the ability to manipulate the
environment is highly one-sided. It is reserved exclusively for those on
the inside, and the latter are concerned with maximizing the profit of a
small group and expanding their power. It is possible to regard this
group as the inner core of the post-democratic system, consisting of
leading figures from business, politics, and the intelligence agencies.
Users typically experience this power, which determines the sphere of
possibility within which their everyday activity can take place, in its
soft form, for instance when new features are introduced that change the
information environment. The hard form of this power only becomes
apparent in extreme cases, for instance when a profile is suddenly
deleted or a group is removed. This can happen on account of a rule
whose existence does not necessarily have to be public or
transparent,[^36^](#c3-note-0036){#c3-note-0036a} or because of an
external intervention that will only be communicated if it is in the
providers\' interest to do so. Such cases make it clear that, at any
time, service providers can take away the possibilities for action that
they offer. This results in a paradoxical experience on the part of
users: the very environments that open up new opportunities for them in
their personal lives prove to be entirely beyond influence when it comes
to fundamental decisions that affect everyone. And, as the majority of
people gradually lose the ability to co-determine how the "big
questions" are answered, a very []{#Page_142 type="pagebreak"
title="142"}small number of actors is becoming stronger than ever. This
paradox of new opportunities for action and simultaneous powerlessness
has been reflected in public debate, where there has also been much
(one-sided) talk about empowerment and the loss of
control.[^37^](#c3-note-0037){#c3-note-0037a} It would be better to
discuss a shift in power that has benefited the elite at the expense of
the vast majority of people.
:::

::: {.section}
### Networks as monopolies {#c3-sec-0007}

Whereas the dominance of output legitimation is new in the realm of
politics, it is normal and seldom regarded as problematic in the world
of business.[^38^](#c3-note-0038){#c3-note-0038a} For, at least in
theory (that is, under the conditions of a functioning market),
customers are able to deny the legitimacy of providers and ultimately
choose between competing products. In the case of social mass media,
however, there is hardly any competition, despite all of the innovation
that is allegedly taking place. Facebook, Twitter, and many other
platforms use closed protocols that greatly hinder the ability of their
members to communicate with the users of competing providers. This has
led to a situation in which the so-called *network effect* -- the fact
that the more a network connects people with one another, the more
useful and attractive it becomes -- has given rise to a *monopoly
effect*: the entire network can only consist of a single provider. This
connection between the network effect and the monopoly effect, however,
is not inevitable, but rather fabricated. It is the closed standards
that make it impossible to switch providers without losing access to the
entire network and thus also to the communal formations that were
created on its foundation. From the perspective of the user, this
represents an extremely high barrier against leaving the network -- for,
as discussed above, these formations now play an essential role in the
creation of both identity and opportunities for action. From the user\'s
standpoint, this is an all-or-nothing decision with severe consequences.
Formally, this is still a matter of individual and free choice, for no
one is being forced, in the classical sense, to use a particular
provider.[^39^](#c3-note-0039){#c3-note-0039a} Yet the options for
action are already pre-structured in such a way that free choice is no
longer free. The majority of American teens, for example, despite
[]{#Page_143 type="pagebreak" title="143"}no longer being very
enthusiastic about Facebook, continue using the network for fear of
missing out on something.[^40^](#c3-note-0040){#c3-note-0040a} This
contradiction -- voluntarily doing something that one does not really
want to do -- and the resulting experience of failing to shape one\'s
own activity in a coherent manner are ideal-typical manifestations of
the power of networks.
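
A deliberately simplified way to give this intuition a number is to
count the possible pairwise connections in a network, which grow
quadratically with the number of members; leaving a large closed network
therefore means giving up vastly more potential connections than the
same decision would cost on a small one.

```python
# Possible pairwise connections among n members: n * (n - 1) / 2.
# A deliberately simplified illustration of how sheer size becomes a lock-in.
def possible_connections(members: int) -> int:
    return members * (members - 1) // 2

for members in (10, 1_000, 1_000_000):
    print(f"{members:>9,} members -> {possible_connections(members):>15,} possible connections")
```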

The problem experienced by the unwilling-willing users of Facebook has
not been caused by the transformation of communication into data as
such. This is necessary to provide input for algorithms, which turn the
flood of information into something usable. To this extent, the general
complaint about the domination of algorithms is off the mark. The
problem is not the algorithms themselves but rather the specific
capitalist and post-democratic setting in which they are implemented.
They only become an instrument of domination when open and
decentralized activities are transferred into closed and centralized
structures in which far-reaching, fundamental decision-making powers and
possibilities for action are embedded that legitimize themselves purely
on the basis of their output. Or, to adapt the title of Rosa von
Praunheim\'s film, which I discussed in my first chapter: it is not the
algorithm that is perverse, but the situation in which it lives.
:::

::: {.section}
### Political surveillance {#c3-sec-0008}

In June 2013, Edward Snowden exposed an additional and especially
problematic aspect of the expansion of post-democratic structures: the
comprehensive surveillance of the internet by government intelligence
agencies. The latter do not use collected data primarily for commercial
ends (although they do engage in commercial espionage) but rather for
political repression and the protection of central power interests --
or, to put it in more neutral terms, in the service of general security.
Yet the NSA and other intelligence agencies also record decentralized
communication and transform it into (meta-)data, which are centrally
stored and analyzed.[^41^](#c3-note-0041){#c3-note-0041a} This process
is used to generate possible courses of action, from intensifying the
surveillance of individuals and manipulating their informational
environment[^42^](#c3-note-0042){#c3-note-0042a} to launching military
drones for the purpose of
assassination.[^43^](#c3-note-0043){#c3-note-0043a} The []{#Page_144
type="pagebreak" title="144"}great advantage of meta-data is that they
can be standardized and thus easily evaluated by machines. This is
especially important for intelligence agencies because, unlike social
mass media, they do not analyze uniformly formatted and easily
processable streams of communication. That said, the boundaries between
post-democratic social mass media and government intelligence services
are fluid. As is well known by now, the two realms share a number of
continuities in personnel and commonalities with respect to their
content.[^44^](#c3-note-0044){#c3-note-0044a} In 2010, for instance,
Facebook\'s chief security officer left his job for a new position at
the NSA. Personnel swapping of this sort takes place at all levels and
is facilitated by the fact that the two sectors are engaged in nearly
the same activity: analyzing social interactions in real time by means
of their exclusive access to immense volumes of data. The lines of
inquiry and the applied methods are so similar that universities,
companies, and security organizations are able to cooperate closely with
one another. In many cases, certain programs or analytic methods are
just as suitable for commercial purposes as they are for intelligence
agencies and branches of the military. This is especially apparent in
the research that is being conducted. Scientists, businesses, and
militaries share a common interest in discovering collective social
dynamics as early as possible, isolating the relevant nodes (machines,
individual people, or groups) through which these dynamics can be
influenced, and developing strategies for specific interventions to
achieve one goal or another. Aspects of this cooperation are publicly
documented. Since 2011, for instance, the Defense Advanced Research
Projects Agency (DARPA) -- the American agency that, in the 1960s,
initiated and financed the development of the internet -- has been
running its own research program on social mass media with the name
Social Media in Strategic Communication. Within the framework of this
program, more than 160 scientific studies have already been published,
with titles such as "Automated Leadership Analysis" or "Interplay
between Social and Topical
Structure."[^45^](#c3-note-0045){#c3-note-0045a} Since 2009, the US
military has been coordinating research in this field through a program
called the Minerva Initiative, which oversees more than 70 individual
projects.[^46^](#c3-note-0046){#c3-note-0046a} Since 2009, too, the
European Union has been working together []{#Page_145 type="pagebreak"
title="145"}with universities and security agencies within the framework
of the so-called INDECT program, the goal of which is "to involve
European scientists and researchers in the development of solutions to
and tools for automatic threat
detection."[^47^](#c3-note-0047){#c3-note-0047a} Research, however, is
just one area of activity. As regards the collection of data and the
surveillance of communication, there is also a high degree of
cooperation between private and government actors, though it is not
always without tension. Snowden\'s revelations have done little to
change this. The public outcry of large internet companies over the fact
that the NSA has been monitoring their services might be an act of
showmanship more than anything else. Such bickering, according to the
security expert Bruce Schneier, is "mostly role-playing designed to keep
us blasé about what\'s really going
on."[^48^](#c3-note-0048){#c3-note-0048a}

Like the operators of social mass media, intelligence agencies also
argue that their methods should be judged according to their output;
that is, the extent to which they ensure state security. Outsiders,
however, are hardly able to make such a judgment. Input legitimation --
that is, the question of whether government security agencies are
operating within the bounds of the democratically legitimized order of
law -- seems to be playing a less significant role in the public
discussion. In somewhat exaggerated terms, one could say that the
disregard for fundamental rights is justified by the quality of the
"security" that these agencies have created. Perhaps the similarity of
the general methods and self-justifications with which service providers
of social production, consumption, and security are constantly
"optimized" is one reason why there has yet to be widespread public
protest against comprehensive surveillance programs. We have been warned
of the establishment of a "police state in reserve," which can be
deployed at any time, but these warnings seem to have fallen on deaf
ears.[^49^](#c3-note-0049){#c3-note-0049a}
:::

::: {.section}
### The normalization of post-democracy {#c3-sec-0009}

At best, it seems as though the reflex of many people is to respond to
even fundamental political issues by considering only what might be
useful or pleasant for themselves in the short term. Apparently, many
people consider it normal to []{#Page_146 type="pagebreak"
title="146"}be excluded from decisions that affect broad and significant
areas of their life. The post-democracy of social mass media, which has
deeply permeated the constitution of everyday life and the constitution
of subjects, is underpinned by the ever advancing post-democracy of
politics. It changes the expectations that citizens have for democratic
institutions, and it makes their increasing erosion seem expected and
normal to broad strata of society. The violation of fundamental and
constitutional civil rights, such as those concerning the protection of
data, is increasingly regarded as unavoidable and -- from the pragmatic
perspective of the individual -- not so bad. This has of course
benefited political decision-makers, who have shown little desire to
change the situation, safeguard basic rights, and establish democratic
control over all areas of executive
authority.[^50^](#c3-note-0050){#c3-note-0050a}

The spread of "smart" technologies is enabling such post-democratic
processes and structures to permeate all areas of life. Within one\'s
private living space, this happens through smart homes, which are still
limited to the high end of the market, and smart meters, which have been
implemented across all social
strata.[^51^](#c3-note-0051){#c3-note-0051a} The latter provide
electricity companies with detailed real-time data about a household\'s
usage behavior and are supposed to enhance energy efficiency, but it
remains unclear exactly how this new efficiency will be
achieved.[^52^](#c3-note-0052){#c3-note-0052a} The concept of the "smart
city" extends this process to entire municipalities. Over the course of
the next few decades, for instance, Siemens predicts that "cities will
have countless autonomous, intelligently functioning IT systems that
will have perfect knowledge of users\' habits and energy consumption,
and provide optimum service. \[...\] The goal of such a city is to
optimally regulate and control resources by means of autonomous IT
systems."[^53^](#c3-note-0053){#c3-note-0053a} According to this vision,
the city will become a cybernetic machine, but if everything is
"optimally" regulated and controlled, who will be left to ask in whose
interests these autonomous systems are operating?

Such dynamics, however, not only reorganize physical space on a small
and a large scale; they also infiltrate human beings. Adherents of the
Quantified Self movement work diligently to record digital information
about their own bodies. The number of platforms that incite users to
stay fit (and []{#Page_147 type="pagebreak" title="147"}share their data
with companies) with competitions, point systems, and similar incentives
has been growing steadily. It is just a small step from this hobby
movement to a disciplinary regime that is targeted at the
body.[^54^](#c3-note-0054){#c3-note-0054a} Imagine the possibilities of
surveillance and sanctioning that will come about when data from
self-optimizing applications are combined with the data available to
insurance companies, hospitals, authorities, or employers. It does not
take too much imagination to do so, because this is already happening in
part today. At the end of 2014, for instance, the Generali Insurance
Company announced a new set of services that is marketed under the name
Vitality. People insured in Germany, France, and Austria are supposed to
send their health information to the company and, as a reward for
leading a "proper" lifestyle, receive a rebate on their premium. The
long-term goal of the program is to develop "behavior-dependent tariff
models," which would undermine the solidarity model of health
insurance.[^55^](#c3-note-0055){#c3-note-0055a}

According to the legal scholar Frank Pasquale, the sum of all these
developments has led to a black-box society: more and more social processes are
being controlled by algorithms whose operations are not transparent
because they are shielded from the outside world and thus from
democratic control.[^56^](#c3-note-0056){#c3-note-0056a} This
ever-expanding "post-democracy" is not simply liberal democracy with a
few problems that can be eliminated through well-intentioned reforms.
Rather, a new social system has emerged in which allegedly relaxed
control over social activity is compensated for by a heightened level of
control over the data and structural conditions pertaining to the
activity itself. In this system, both the virtual and the physical world
are altered to achieve particular goals -- goals determined by just a
few powerful actors -- without the inclusion of those affected by these
changes and often without them being able to notice the changes at all.
Whoever refuses to share his or her data freely comes to look suspicious
and, regardless of the motivations behind this anonymity, might even be
regarded as a potential enemy. In July 2014, for instance, the following
remarks were included in Facebook\'s terms of use: "On Facebook people
connect using their real names and identities. \[...\] Claiming to be
another person \[...\] or creating multiple accounts undermines
community []{#Page_148 type="pagebreak" title="148"}and violates
Facebook\'s terms."[^57^](#c3-note-0057){#c3-note-0057a} For the police
and the intelligence agencies in particular, all activities that attempt
to evade comprehensive surveillance are generally suspicious. Even in
Germany, people are labeled "extremists" by the NSA for the sole reason
that they have supported the Tor Project\'s anonymity
software.[^58^](#c3-note-0058){#c3-note-0058a} In a 2014 trial in
Vienna, the use of a foreign pre-paid telephone was introduced as
evidence that the defendant had attempted to conceal a crime, even
though this is a harmless and common method for avoiding roaming charges
while abroad.[^59^](#c3-note-0059){#c3-note-0059a} This is a sort of
anti-mask law 2.0, and every additional terrorist attack is used to
justify extending its reach.

It is clear that Zygmunt Bauman\'s bleak assessment of freedom in what
he calls "liquid modernity" -- "freedom comes when it no longer
matters"[^60^](#c3-note-0060){#c3-note-0060a} -- can easily be modified
to suit the digital condition: everyone can participate in cultural
processes, because culture itself has become irrelevant. Disputes about
shared meaning, in which negotiations are made about what is important
to people and what ought to be achieved, have less and less influence
over the way power is exercised. Politics has been abandoned for an
administrative management that oscillates between paternalism and
authoritarianism. Issues that concern the common good have been
delegated to "autonomous IT systems" and removed from public debate. By
now, the exercise of power, which shapes society, is based less on basic
consensus and cultural hegemony than it is on the technocratic argument
that "there is no alternative" and that the (informational) environment
in which people have to orient themselves should be optimized through
comprehensive control and manipulation -- whether they agree with this
or not.
:::

::: {.section}
### Forms of resistance {#c3-sec-0010}

As far as the circumstances outlined above are concerned, Bauman\'s
conclusion may seem justified. But as an overarching assessment of
things, it falls somewhat short, for every form of power provokes its
own forms of resistance.[^61^](#c3-note-0061){#c3-note-0061a} In the
context of post-democracy under the digital condition, these forms have
likewise shifted to the level of data, and an especially innovative and
effective means of resistance []{#Page_149 type="pagebreak"
title="149"}has been the "leak"; that is, the unauthorized publication
of classified documents, usually in the form of large datasets. The most
famous platform for this is WikiLeaks, which since 2006 has attracted
international attention to this method with dozens of spectacular
publications -- on corruption scandals, abuses of authority, corporate
malfeasance, environmental damage, and war crimes. As a form of
resistance, however, leaking entire databases is not limited to just one
platform. In recent years and through a variety of channels, large
amounts of data (from banks and accounting firms, for instance) have
been made public or have been handed over to tax investigators by
insiders. Thus, in 2014, for instance, the *Süddeutsche Zeitung*
(operating as part of the International Consortium of Investigative
Journalists based in Washington, DC), was not only able to analyze the
so-called "Offshore Leaks" -- a database concerning approximately
122,000 shell companies registered in tax
havens[^62^](#c3-note-0062){#c3-note-0062a} -- but also the "Luxembourg
Leaks," which consisted of 28,000 pages of documents demonstrating the
existence of secret and extensive tax deals between national authorities
and multinational corporations and which caused a great deal of
difficulty for Jean-Claude Juncker, the newly elected president of the
European Commission and former prime minister of
Luxembourg.[^63^](#c3-note-0063){#c3-note-0063a}

The reasons why employees or government workers have become increasingly
willing to hand over large amounts of information to journalists or
whistle-blowing platforms are to be sought in the contradictions of the
current post-democratic regime. Over the past few years, the discrepancy
in Western countries between the self-representation of democratic
institutions and their frequently post-democratic practices has become
even more obvious. For some people, including the former CIA employee
Edward Snowden, this discrepancy created a moral conflict. He claimed
that his work consisted in the large-scale investigation and monitoring
of respectable citizens, thus systematically violating the Constitution,
which he was supposed to be protecting. He resolved this inner conflict
by gathering material about his own activity, then releasing it, with
the help of journalists, to the public, so that the latter could
understand and judge what was taking
place.[^64^](#c3-note-0064){#c3-note-0064a} His leaks benefited from
technical []{#Page_150 type="pagebreak" title="150"}advances, including
the new forms of cooperation which have resulted from such advances.
Even institutions that depend on keeping secrets, such as banks and
intelligence agencies, have to "share" their information internally and
rely on a large pool of technical personnel to record and process the
massive amounts of data. To accomplish these tasks, employees need the
fullest possible access to this information, for even the most secret
databases have to be maintained by someone, and this also involves
copying data. Thus, it is far easier today than it was just a few
decades ago to smuggle large volumes of data out of an
institution.[^65^](#c3-note-0065){#c3-note-0065a}

This new form of leaking, however, did not become an important method of
resistance on account of technical developments alone. In the era of big
data, databases are the central resource not only for analyzing how the
world is described by digital communication, but also for generating
that communication. The power of networks in particular is organized
through the construction of environmental conditions that operate
simultaneously in many places. On their own, the individual commands and
instructions are often banal and harmless, but as a whole they
contribute to a dynamic field that is meant to produce the results
desired by the planners who issue them. In order to reconstruct this
process, it is necessary to have access to these large amounts of data.
With such information at hand, it is possible to relocate the
surreptitious operations of post-democracy into the sphere of political
debate -- the public sphere in its emphatic, liberal sense -- and this
needs to be done in order to strengthen democratic forces against their
post-democratic counterparts. Ten years after WikiLeaks and three years
after Edward Snowden\'s revelations, it remains highly questionable
whether democratic actors are strong enough or able to muster the
political will to use this information to tip the balance in their favor
for the long term. Despite the forms of resistance that have arisen in
response to these new challenges, one could be tempted to concur with
Bauman\'s pessimistic conclusion about the irrelevance of freedom,
especially if post-democracy were the only concrete political tendency
of the digital condition. But it is not. There is a second political
trend taking place, though it is not quite as well
developed.[]{#Page_151 type="pagebreak" title="151"}
:::
:::

::: {.section}
Commons {#c3-sec-0011}
-------

The digital condition includes not only post-democratic structures in
more areas of life; it is also characterized by the development of a new
manner of production. As early as 2002, the legal scholar Yochai Benkler
coined the term "commons-based peer production" to describe the
development in question.[^66^](#c3-note-0066){#c3-note-0066a} Together,
Benkler\'s peers form what I have referred to as "communal formations":
people joining forces voluntarily and on a fundamentally even playing
field in order to pursue common goals. Benkler enhances this idea with
reference to the constitutive role of the commons for many of these
communal formations.

As such, commons are neither new nor specifically Western. They exist in
many cultural traditions, and thus the term is used in a wide variety of
ways.[^67^](#c3-note-0067){#c3-note-0067a} In what follows, I will
distinguish between three different dimensions. The first of these
involves "common pool resources"; that is, *goods* that can be used
communally. The second dimension is that these goods are administered by
the "commoners"; that is, by members of *communities* who produce, use,
and cultivate the resources. Third, this activity gives rise to forms of
"commoning"; that is, to *practices*, *norms*, and *institutions* that
are developed by the communities
themselves.[^68^](#c3-note-0068){#c3-note-0068a}

In the commons, efforts are focused on the long-term utility of goods.
This does not mean that commons cannot also be used for the production
of commercial products -- cheese from the milk of cows that graze on a
common pasture, for instance, or books based on the content of Wikipedia
articles. The relationships between the people who use a certain
resource communally, however, are not structured through money but
rather through direct social cooperation. Commons are thus
fundamentally different from classical market-oriented institutions,
which orient their activity primarily in response to price signals.
Commons are also fundamentally distinct from bureaucracies -- whether in
the form of public administration or private industry -- which are
organized according to hierarchical chains of command. And they differ,
too, from public institutions. Whereas the latter are concerned with
society as a whole -- or at least that is []{#Page_152 type="pagebreak"
title="152"}their democratic mandate -- commons are inwardly oriented
forms that primarily exist by means and for the sake of their members.

::: {.section}
### The organization of the commons {#c3-sec-0012}

Commoners create institutions when they join together for the sake of
using a resource in a long-term and communal manner. In this, the
separation of producers and consumers, which is otherwise ubiquitous,
does not play a significant role: to different and variable extents, all
commoners are producers and consumers of the common resources. It is an
everyday occurrence for someone to take something from the common pool
of resources for his or her own use, but it is understood that something
will be created from this that, in one form or another, will flow back
into the common pool. This process -- the reciprocal relationship
between singular appropriation and communal provisions -- is one of the
central dynamics within commons.

Because commoners orient their activity neither according to price
signals (markets) nor according to instructions or commands
(hierarchies), social communication among the members is the most
important means of self-organization. This communication is intended to
achieve consensus and the voluntary acceptance of negotiated rules, for
only in such a way is it possible to maintain the voluntary nature of
the arrangement and to keep internal controls at a minimum. Voting,
which is meant to legitimize the preferences of a majority, is thus
somewhat rare, and when it does happen, it is only of subordinate
significance. The main issue is to build consensus, and this is usually
a complex process requiring intensive communication. One of the reasons
why the very old practice of the commons is now being readopted and
widely discussed is because communication-intensive and horizontal
processes can be organized far more effectively with digital
technologies. Thus, the idea of collective participation and
organization beyond small groups is no longer just a utopian vision.

The absence of price signals and chains of command causes the social
institutions of the commons to develop complex structures for
comprehensively integrating their members. []{#Page_153 type="pagebreak"
title="153"}This typically involves weaving together a variety of
economic, social, cultural, and technical dimensions. Commons realize an
alternative to the classical separation of spheres that is so typical of
our modern economy and society. The economy is not understood here as an
independent realm that functions according to a different set of rules
and with externalities, but rather as one facet of a complex and
comprehensive phenomenon with intertwining commercial, social, ethical,
ecological, and cultural dimensions.

It is impossible to determine how the interplay between these three
dimensions generally solidifies into concrete institutions.
Historically, many different commons-based institutions were developed,
and their number and variety have only increased under the digital
condition. Elinor Ostrom, who was awarded the 2009 Nobel Prize in
Economics for her work on the commons, has thus refrained from
formulating a general model for
them.[^69^](#c3-note-0069){#c3-note-0069a} Instead, she has identified a
series of fundamental challenges for which all commoners have to devise
their own solutions.[^70^](#c3-note-0070){#c3-note-0070a} For example,
the membership of a group that communally uses a particular resource
must be defined and, if necessary, limited. Especially in the case of
material resources, such as pastures on which several people keep their
animals, it is important to limit the number of members for the simple
reason that the resource in question might otherwise be over-utilized
(this is allegedly the "tragedy of the
commons").[^71^](#c3-note-0071){#c3-note-0071a} Things are different
with so-called non-rival goods, which can be consumed by one person
without excluding their use by another. When I download and use a freely
available word-processing program, for instance, I do not take away
another person\'s chance to do the same. But even in the case of digital
common goods, access is often tied to certain conditions. Whoever uses
free software has to accept its licensing agreement.

Internally, commons are often meritocratically oriented. Those who
contribute more are also able to make greater use of the common good (in
the case of material goods) or more strongly influence its development
(in the case of informational goods). In the latter case, the
meritocratic element takes into account the fact that the challenge does
not lie in avoiding the over-utilization of a good, but rather in
generating new contributions to its further development. Those who
[]{#Page_154 type="pagebreak" title="154"}contribute most to the
provision of resources should also be able to determine their further
course of development, and this represents an important incentive for
these members to remain in the group. This is in the interest of all
participants, and thus the authority of the most active members is
seldom called into question. This does not mean, however, that there are
no differences of opinion within commons. Here, too, reaching consensus
can be a time-consuming process. Among the most important
characteristics of all commons are thus mechanisms for decision-making
that involve members in a variety of ways. The rules that govern the
commons are established by the members themselves. This goes far beyond
choosing between two options presented by a third party. Commons are not
simply markets without money. All rele­vant decisions are made
collectively within the commons, and they do not simply aggregate as the
sum of individual decisions. Here, unlike the case of post-democratic
structures, the levels of participation and decision-making are not
separated from one another. On the contrary, they are directly and
explicitly connected.

The implementation of rules and norms, even if they are the result of
consensus, is never an entirely smooth process. It is therefore
necessary, as Ostrom has stressed, to monitor rule compliance within
commons and to develop a system of graded sanctions. Minor infractions
are punished with social disapproval or small penalties, while graver
infractions warrant stiffer penalties that can lead to a person\'s
exclusion from the group. In order for conflicts or rule violations not
to escalate in the commons to the extent that expulsion is the only
option, mechanisms for conflict resolution have to be put in place. In
the case of Wikipedia, for instance, conflicts are usually resolved
through discussions. This is not always productive, however, for
occasionally the "solution" turns out to be that one side or the other
has simply given up out of exhaustion.

A final important point is that commons do not exist in isolation from
society. They are always part of larger social systems, which are
normally governed by the principles of the market or subject to state
control, and are thus in many cases oppositional to the practice of
commoning. Political resistance is often incited by the very claim that
a particular []{#Page_155 type="pagebreak" title="155"}good can be
communally administered and does not belong to a single owner, but
rather to a group that governs its own affairs. Yet without the
recognition of the right to self-organization and without the
corresponding legal conditions allowing this right to be perceived as
such, commons are barely able to form at all, and existing commons are
always at risk of being expropriated and privatized by a third party.
This is the true "tragedy of the commons," and it happens all the
time.[^72^](#c3-note-0072){#c3-note-0072a}
:::

::: {.section}
### Informational common goods: free software and free culture {#c3-sec-0013}

The term "commons" was first applied to informational goods during the
second half of the 1990s.[^73^](#c3-note-0073){#c3-note-0073a} The
practice of creating digital common goods, however, goes back to the
origins of free software around the middle of the 1980s. Since then, a
complex landscape has developed, with software codes being cooperatively
and sustainably managed as common resources available to everyone (who
accepts their licensing agreements). This can best be explained with an
example. One of the oldest projects in the area of free software -- and
one that continues to be of relevance today -- is Debian, a so-called
"distribution" (that is, a compilation of software components) that has
existed since 1993. According to its own website:

::: {.extract}
The Debian Project is an association of individuals who have made common
cause to create a free operating system. \[...\] An operating system is
the set of basic programs and utilities that make your computer run.
\[...\] Debian comes with over 43000 packages (precompiled software that
is bundled up in a nice format for easy installation on your machine).
\[...\] All of it free.[^74^](#c3-note-0074){#c3-note-0074a}
:::

The special thing about Unix-like operating systems is that they are
composed of a very large number of independent yet interacting programs.
The task of a distribution -- and this task is hardly trivial -- is to
combine this modular variety into a whole that provides, in an
integrated manner, all of the functions of a contemporary computer.
Debian is particularly []{#Page_156 type="pagebreak"
title="156"}important because the community sets extremely high
standards for itself, and it is for this reason that the distribution is
not only used by many server administrators but is also the foundation
of numerous end-user-oriented services, including Ubuntu and Linux Mint.
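
The core task of a distribution can be stated quite simply: given one package that a user wants, compute the full set of components it depends on, so that everything can be installed consistently. The following minimal Python sketch illustrates this kind of dependency resolution on an invented toy index; Debian\'s actual tools (dpkg and apt) handle tens of thousands of packages along with versions and conflicts, which this sketch deliberately ignores.

```python
# A toy package index; names and dependencies are invented for illustration.
# The resolver assumes the index contains no dependency cycles.
TOY_INDEX = {
    "text-editor": ["ui-toolkit", "spell-check"],
    "ui-toolkit": ["libc"],
    "spell-check": ["dictionary", "libc"],
    "dictionary": [],
    "libc": [],
}

def resolve(package, index, installed=None):
    """Collect `package` and everything it depends on, each exactly once."""
    if installed is None:
        installed = set()
    if package in installed:
        return installed
    for dependency in index[package]:
        resolve(dependency, index, installed)
    installed.add(package)
    return installed

print(sorted(resolve("text-editor", TOY_INDEX)))
# ['dictionary', 'libc', 'spell-check', 'text-editor', 'ui-toolkit']
```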

The Debian Project has developed a complex form of organization that is
based on a set of fundamental principles defined by the members
themselves. These are delineated in the Debian Social Contract, which
was first formulated in 1997 and subsequently revised in
2004.[^75^](#c3-note-0075){#c3-note-0075a} It stipulates that the
software has to remain "100% free" at all times, in the sense that the
software license guarantees the freedom of unlimited use, modification,
and distribution. The developers understand this primarily as an ethical
obligation. They explicitly regard the project as a contribution "to the
free software community." The social contract demands transparency on
the level of the program code: "We will keep our entire bug report
database open for public view at all times. Reports that people file
online will promptly become visible to others." There are both technical
and ethical considerations behind this. The contract makes no mention at
all of a classical production goal; there is no mention, for instance,
of competitive products or a schedule for future developments. To put it
in Colin Crouch\'s terms, input legitimation comes before output
legitimation. The initiators silently assume that the project\'s basic
ethical, technical, and social orientations will result in high quality,
but they do not place this goal above any other.

The Debian Social Contract is the basis for cooperation and the central
reference point for dealing with conflicts. It forms the normative core
of a community that is distinguished by its equal treatment of ethical,
political, technical, and economic issues. The longer the members have
been cooperating on this basis, the more binding this attitude
has become for each of them, and the more sustainable the community has
become as a whole. In other words, it has taken on a concrete form that
is relevant to the activities of everyday
life.[^76^](#c3-note-0076){#c3-note-0076a} Today, Debian is a global
project with a stable core of about a thousand developers, most of whom
live in Europe, the United States, and Latin
America.[^77^](#c3-note-0077){#c3-note-0077a} The Debian commons is a
high-grade collaborative organization, []{#Page_157 type="pagebreak"
title="157"}the necessary cooperation for which is enabled by a complex
infrastructure that automates many routine tasks. This is the only
efficient way to manage the program code, which has grown to more than a
hundred million lines. Yet not everything takes place online.
International and local meetings and conferences have long played an
important role. These have not only been venues for exchanging
information and planning the coordination of the project; they have also
helped to create a sense of mutual trust, without which this form of
voluntary collaboration would not be possible.

Despite the considerable size of the Debian Project, it is just one part
of a much larger institutional ecology that includes other communities,
universities, and businesses. Most of the 43,000 software packages of the
Debian distribution are programmed by groups of developers that do not
belong to the Debian Project. Debian is "just" a compilation of these
many individual programs. One of these programs written by outsiders is
the Linux kernel, which in many respects is the central and most complex
program within a GNU/Linux operating system. Governing the organization
of processes and data, it thus forms the interface between hardware and
software. An entire institutional subsystem has been built up around
this complex program, upon which everything else depends. The community
of developers was initiated by Linus Torvalds, who wrote the first
rudimentary kernel in 1991. Even though most of the kernel developers
since then have been paid for their work, their cooperation then and now
has been voluntary and, for the vast majority of contributors, has
functioned without monetary exchange. In order to improve collaboration,
a specialized technological infrastructure has been used -- above all
Git, the version-control system that Torvalds developed himself, which
automates many steps in the distributed management of code revisions. In all of this, an important
role is played by the Linux Foundation, a non-profit organization that
takes over administrative, legal, and financial tasks for the community.
The foundation is financed by its members, which include large software
companies that contribute as much as \$500,000 a year. This money is
used, for instance, to pay the most important programmers and to
organize working groups, thus ensuring that the development and
distribution of Linux will continue on a long-term basis. The
[]{#Page_158 type="pagebreak" title="158"}businesses that finance the
Linux Foundation may be profit-oriented institutions, but the main work
of the developers -- the program code -- flows back into the common pool
of resources, which the explicitly non-profit Debian Project can then
use to compile its distribution. The freedoms guaranteed by the free
license render this transfer from commercial to non-commercial use not
only legally unproblematic but even desirable to the for-profit service
providers, as they themselves also need entire operating systems and not
just the kernel.
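
The principle that allows Git to support voluntary cooperation at this scale is content addressing: every revision is identified by a cryptographic hash of its content and of its parent revision, so histories produced independently by thousands of developers can be compared and merged unambiguously. The sketch below illustrates only this principle and is not Git\'s actual object format.

```python
import hashlib

def commit(content, parent_id=None):
    """Derive a content-addressed identifier for a new revision."""
    payload = f"{parent_id}\n{content}".encode("utf-8")
    return hashlib.sha1(payload).hexdigest()

# Two revisions chain together through their hashes; anyone can verify
# the resulting history without relying on a central authority.
first = commit("initial scheduler code")
second = commit("fix scheduler corner case", parent_id=first)
print(first[:12], "->", second[:12])
```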

The Debian Project draws from this pool of resources and is at the same
time a part of it. Others can therefore use Debian\'s software code, and
this happens to a large extent, for instance through other Linux
distributions. This is not understood as competition for market share
but rather as an expression of the community\'s vitality, which for
Debian represents a central and normative point of pride. As the Debian
Social Contract explicitly states, "We will allow others to create
distributions containing both the Debian system and other works, without
any fee."

Thus, over the years, a multifaceted institutional landscape has been
created in which collaboration can take place between for-profit and
non-profit entities -- between formal organizations and informal
communal formations. Together, they form the software commons.
Communally, they strive to ensure that high-quality free software will
continue to exist for the long term. The coordination necessary for this
is not tension-free. Within individual communities, on the contrary,
there are many conflicts and competitive disputes about people, methods,
and strategic goals. Tensions can also run high between the communities,
foundations, and companies that cooperate and compete with one another
(sometimes more directly, sometimes less directly). To cite one example,
the relationship between the Debian Project and Canonical, the company
that produces the Ubuntu operating system, was strained for several
years. At the heart of the conflict was the issue of whether Ubuntu\'s
developers were giving enough back to the Debian Project or whether they
were simply exploiting it. Although the Debian Social Contract expressly
allows the commercial use of its operating system, Canonical was and
remains dependent on the software commons functioning as []{#Page_159
type="pagebreak" title="159"}a whole, because, after all, the company
needs to be able to make use of the latest developments in the Debian
system. It took years to defuse the conflict, and this was only achieved
when forums were set up to guarantee that information and codes could
flow in both directions. The Debian community, for example, introduced
something called a "derivatives front desk" to improve its communication
with programmers of distributions that, like Ubuntu, derive from Debian.
For its part, Canonical improved its internal processes so that code
could flow back into the Debian Project, and their systems for
bug-tracking were partially integrated to avoid duplicates. After
several years of strife, Raphaël Hertzog, a prominent member of the
Debian community, was able to summarize matters as follows:

::: {.extract}
The Debian--Ubuntu relationship used to be a hot topic, but that\'s no
longer the case thanks to regular efforts made on both sides. Conflicts
between individuals still happen, but there are multiple places where
they can be reported and discussed \[...\]. Documentation and
infrastructure are in place to make it easier for volunteers to do the
right thing. Despite all those process improvements, the best results
still come out when people build personal relationships by discussing
what they are doing. It often leads to tight cooperation, up to commit
rights to the source repositories. Regular contacts help build a real
sense of cooperation that no automated process can ever hope to
achieve.[^78^](#c3-note-0078){#c3-note-0078a}
:::

In all successful commons, diverse social relations, mutual trust, and a
common culture play an important role as preconditions for the
consensual resolution of conflicts. This is not a matter of achieving an
ideal -- as Hertzog stressed, not every conflict can be set aside -- but
rather of reaching pragmatic solutions that allow actors to pursue, on
equal terms, their own divergent goals within the common project.

The immense commons of the Debian Project encompasses a nearly
unfathomable number of variations. The distribution is available in over
70 languages (in comparison, Apple\'s operating system is sold in 22
languages), and diverse versions exist to suit different application
contexts, aesthetic preferences, hardware needs, and stability
requirements. Within each of these versions, in turn, there are
innumerable []{#Page_160 type="pagebreak" title="160"}variations that
have been created by individual users with different sets of technical
or creative skills. The final result is a continuously changing service
that can be adapted for countless special requirements, desires, and
other features. To outsiders, this internal differentiation is often
difficult to comprehend, and it can soon leave the impression that there
is little more to it than a tedious variety of essentially the same
thing. What user would ever need 60 different text
editors?[^79^](#c3-note-0079){#c3-note-0079a} For those who would like
to use free software without having to join a group, a range of simple
and standardized products has been made available. For
commoners, however, this diversity is enormously important, for it is an
expression of their fundamental freedom to work precisely on those
problems that are closest to their hearts -- even if that means creating
another text editor.

With the success of free software toward the end of the 1990s, producers
in other areas of culture, who were just starting to use the internet,
also began to take an interest in this new manner of production. It
seemed to be a good fit with the vibrant do-it-yourself culture that was
blooming online, and all the more so because there were hardly any
attractive commercial alternatives at the time. This movement was
sustained by the growing tier of professional and non-professional
makers of culture that had emerged over the course of the aforementioned
transformations of the labor market. At first, many online sources were
treated as "quasi-common goods." It was considered normal and desirable
to appropriate them and pass them on to others without first having to
develop a proper commons for such activity. This necessarily led to
conflicts. Unlike free software, which on account of its licensing was
on secure legal ground from the beginning, copyright violations were
rampant in the new do-it-yourself culture. For the sake of engaging in
the referential processes discussed in the previous chapter,
copyright-protected content was (and continues to be) used, reproduced,
and modified without permission. Around the turn of the millennium, the
previously latent conflict between "quasi-commoners" and the holders of
traditional copyrights became an open dispute, which in many cases was
resolved in court. Founded in June 1999, the file-sharing service
Napster gained, over the course of just 18 months, 25 million users
[]{#Page_161 type="pagebreak" title="161"}worldwide who simply took the
distribution of music into their own hands without the authorization of
copyright owners. This incited a flood of litigation that managed to
shut the service down in July 2001. This did not, however, put an end to
the large-scale practice of unauthorized data sharing. New services and
technologies, many of which used (the file-sharing protocol) BitTorrent,
quickly filled in the gap. The number of court cases skyrocketed, not
least because new legal standards expanded the jurisdiction of copyright
law and enabled it to be applied more
aggressively.[^80^](#c3-note-0080){#c3-note-0080a} These conflicts
forced a critical mass of cultural producers to deal with copyright law
and to reconsider how the practices of sharing and modifying could be
perpetuated in the long term. One of the first results of these
considerations was to develop, following the model of free software,
numerous licenses that were tailored to cultural
production.[^81^](#c3-note-0081){#c3-note-0081a} In the cultural
context, free licenses achieved widespread distribution after 2001 with
the arrival of Creative Commons (CC), a California-based foundation that
began to provide easily understandable and adaptable licensing kits and
to promote its services internationally through a network of partner
organizations. This set of licenses made it possible to transfer user
rights to the community (defined by the acceptance of the license\'s
terms and conditions) and thus to create a freely accessible pool of
cultural resources. Works published under a CC license can always be
consumed and distributed free of charge (though not necessarily freely).
Some versions of the license allow works to be altered; others permit
their commercial use; while some, in turn, only allow non-commercial use
and distribution. In comparison with free software licenses, this
greater emphasis on the rights of individual producers over those of the
community, whose freedoms of use can be doubly restricted (in terms of
the right to alter works or use them for commercial ends), gave rise to
the long-standing critique that, with respect to freedom and
communality, CC licenses in fact represent a
regression.[^82^](#c3-note-0082){#c3-note-0082a} A combination of good
timing, user-friendly implementations, and powerful support from leading
American universities, however, resulted in CC licenses becoming the de
facto legal standard of free culture.
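
The modular structure of the licensing kit can be made concrete with a small sketch. The four standard CC modules are attribution (BY), share-alike (SA), non-commercial (NC), and no-derivatives (ND); the check below is a deliberate simplification for illustration and not a statement about any particular license text.

```python
def allows(modules, commercial=False, derivative=False):
    """Rough check of whether a reuse is permitted under a set of CC modules."""
    if commercial and "NC" in modules:
        return False   # the non-commercial module excludes commercial reuse
    if derivative and "ND" in modules:
        return False   # the no-derivatives module excludes modified versions
    return True        # attribution (BY) is assumed to be given in all cases

print(allows({"BY", "SA"}, commercial=True, derivative=True))   # True
print(allows({"BY", "NC", "ND"}, derivative=True))              # False
```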

Based on a solid legal foundation and thus protected from rampant
copyright conflicts, large and well-structured []{#Page_162
type="pagebreak" title="162"}cultural commons were established, for
instance around the online reference work Wikipedia (which was then,
however, using a different license). As much as the latter is now taken
for granted as an everyday component of informational
life,[^83^](#c3-note-0083){#c3-note-0083a} the prospect of a
commons-generated encyclopedia hardly seemed realistic at the beginning.
Even the founders themselves had little faith in it, and thus Wikipedia
began as a side project. Their primary goal was to develop an
encyclopedia called Nupedia, for which only experts would be allowed to
write entries, which would then have to undergo a seven-stage
peer-review process before being published for free use. From its
beginning, on the contrary, Wikipedia was open for anyone to edit, and
any changes made to it were published without review or delay. By the
time that Nupedia was abandoned in September 2003 (with only 25
published articles), the English-language version of Wikipedia already
consisted of more than 160,000 entries, and the German version, which
came online in May 2001, already had 30,000. The former version reached
1 million entries by March 2006, the latter by December 2009, and by
the beginning of 2015 they had 4.7 million and 1.8 million entries,
respectively. In the meantime (by August 2015), versions have been made
available in 289 other languages, 48 of which have at least 100,000
entries. Both its successes -- its enormous breadth of up-to-date
content, along with its high level of acceptance and quality -- and its
failures, with its low percentage of women editors (around 10 percent),
exhausting discussions, complex rules, lack of young personnel, and
systematic attempts at manipulation, have been well documented because
Wikipedia also guarantees free access to the data generated by the
activities of users, and thus makes the development of the commons
fairly transparent for outsiders.[^84^](#c3-note-0084){#c3-note-0084a}

One of the most fundamental and complex decisions in the history of
Wikipedia was to change its license. The process behind this is
indicative of how thoroughly the community of a commons can be involved
in its decision-making. When Wikipedia was founded in 2001, there was no
established license for free cultural works. The best option available
was the GNU Free Documentation License (GFDL), which had been
developed, however, for software documentation. In the following years,
the CC license became the standard, and this []{#Page_163
type="pagebreak" title="163"}gave rise to the legal problem that content
from Wikipedia could not be combined with CC-licensed works, even though
this would have aligned with the intentions of those who had published
content under either of these licenses. To alleviate this problem and
thus facilitate exchange between Wikipedia and other cultural commons,
the Wikimedia Foundation (which holds the rights to Wikipedia) proposed
to place older content retroactively under both licenses, the GFDL and
the equivalent CC license. In strictly legal terms, the foundation would
have been able to make this decision without consulting the community.
However, it would have lacked legitimacy and might have even caused
upheavals within it. In order to avoid this, an elaborate discussion
process was initiated that led to a membership-wide vote. This process
lasted from December 2007 (when the Wikimedia Foundation resolved to
change the license) to the end of May 2009, when the voting period
concluded. All told, 17,462 votes were cast, of which only 10.5 percent
rejected the proposed changes. More important than the result, however,
was the way it had come about: through a long, consensus-building
process of discussion, for which the final vote served above all to make
the achieved consensus unambiguously
clear.[^85^](#c3-note-0085){#c3-note-0085a} All other decisions that
concern the project as a whole were and continue to be reached in a
similar way. Here, too, input legitimation is at least on an equal
footing with output legitimation.

With Wikipedia, a great deal happens voluntarily and without cost, but
that does not mean that no financial resources are needed to organize
and maintain such a commons on a long-term basis. In particular, it is
necessary to raise funds for infrastructure (hardware, administration,
bandwidth), the employees of the Wikimedia Foundation, conferences, and
its own project initiatives -- networking with schools, universities,
and cultural institutions, for example, or increasing the diversity of
the Wikipedia community. In light of the number of people who use the
encyclopedia, it would be possible to finance the project, which accrued
costs of around 45 million dollars during the 2013--14 fiscal year,
through advertising (in the same manner, that is, as commercial mass
media). Yet there has always been a consensus against this. Instead,
Wikipedia is financed through donations. In 2013--14, the website was
able to raise \$51 million, 37 million of []{#Page_164 type="pagebreak"
title="164"}which came from approximately 2.5 million contributors, each
of whom donated just a small sum.[^86^](#c3-note-0086){#c3-note-0086a}
These small contributions are especially interesting because, to a large
extent, they come from people who consider themselves part of the
community but do not do much editing. This suggests that donating is
understood as an opportunity to make a contribution without having to
invest much time in the project. In this case, donating money is thus
not an expression of charity but rather of communal spirit; it is just
one of a diverse number of ways to remain active in a commons. Precisely
because its economy is not understood as an independent sphere with its
own logic (maximizing individual resources), but rather as an integrated
aspect of cultivating a common resource, non-financial and financial
contributions can be treated equally. Both types of contribution
ultimately derive from the same motivation: they are expressions of
appreciation for the meaning that the common resource possesses for
one\'s own activity.
:::

::: {.section}
### At the interface with physical space: open data {#c3-sec-0014}

Wikipedia, however, is an exception. None of the other new commons have
managed to attract such large financial contributions. The project known
as OpenStreetMap (OSM), founded in 2004 by Steve Coast, is the most
important commons for
geodata.[^87^](#c3-note-0087){#c3-note-0087a} By the beginning of 2016,
it had collected and identified around 5 billion GPS coordinates and
linked them to more than 273 million routes. This work was accomplished
by about half a million people, who surveyed their neighborhoods with
hand-held GPS devices or, where that was not a possibility, extracted
data from satellite images or from public land registries. The project,
which is organized through specialized infrastructure and by local and
international communities, also utilizes a number of automated
processes. These are so important that not only was a "mechanical edit
policy" developed to govern the use of algorithms for editing; the
latter policy was also supplemented by an "automated edits code of
conduct," which defines further rules of behavior. Regarding the
implementation of a new algorithm, for instance, the code states: "We do
not require or recommend a formal vote, but if there []{#Page_165
type="pagebreak" title="165"}is significant objection to your plan --
and even minorities may be significant! -- then change it or drop it
altogether."[^88^](#c3-note-0088){#c3-note-0088a} Here, again, there is
the typical objection to voting and a focus on building a consensus that
does not have to be perfect but simply good enough for the overwhelming
majority of the community to acknowledge it (a "rough consensus").
Today, the coverage and quality of the maps that can be generated from
these data are so good for so many areas that they now represent serious
competition to commercial digital alternatives. OSM data are used not
only by Wikipedia and other non-commercial projects but also
increasingly by large commercial services that need geographical
information and suitable maps but do not want to rely on a commercial
provider whose terms and conditions can change at any time. To the
extent that these commercial applications provide their users with the
opportunity to improve the maps, their input flows back through the
commercial level and into the common pool.
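
The common pool itself has a simple and well-documented structure: "nodes" carry coordinates, "ways" are ordered lists of node references, and both can be annotated with free-form key/value tags. The sketch below uses invented values to show how a map fragment is just one possible rendering of this shared data.

```python
# Invented example data following OpenStreetMap's basic node/way model.
nodes = {
    1: {"lat": 50.8467, "lon": 4.3525, "tags": {}},
    2: {"lat": 50.8470, "lon": 4.3530, "tags": {"highway": "crossing"}},
}
ways = [
    {"id": 100, "nodes": [1, 2],
     "tags": {"highway": "residential", "name": "Example Street"}},
]

# Rendering, routing, or analysis all start from the same shared structures.
for way in ways:
    coords = [(nodes[n]["lat"], nodes[n]["lon"]) for n in way["nodes"]]
    print(way["tags"].get("name", "unnamed"), coords)
```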

Despite its immense community and its regular requests for donations,
the financial resources of the OSM Foundation, which functions as the
legal entity and supporting organization behind the project, cannot be
compared to those of the Wikimedia Foundation. The OSM Foundation has no
employees, and in 2014 it generated just £88,000 in revenue, half of
which was obtained from donations and half from holding
conferences.[^89^](#c3-note-0089){#c3-note-0089a} That said, OSM is
nevertheless a socially, technologically, and financially robust
commons, though one with a model entirely different from Wikipedia\'s.
Because data are at the heart of the project, its needs for hardware and
bandwidth are negligible compared to Wikipedia\'s, and its servers can
be housed at universities or independently operated by individual
groups. Around this common resource, a global network of companies has
formed that offer services on the basis of complex geodata. In doing so,
they allow improvements to go back into the pool or, if financed by
external sources, they can work directly on the common
infrastructure.[^90^](#c3-note-0090){#c3-note-0090a} Here, too, we find
the characteristic juxtaposition of paid and unpaid work, of commercial
and non-commercial orientations that depend on the same common resource
to pursue their divergent goals. If this goes on for a long time, then
there will be an especially strong (self-)interest among everyone
involved for their own work, []{#Page_166 type="pagebreak"
title="166"}or at least part of it, to benefit the long-term development
of the resource in question. Functioning commons, especially the new
informational ones, are distinguished by the heterogeneity of their
motivations and actors. Just as the Wikipedia project successfully and
transformatively extended the experience of working with free software
to the generation of large bases of knowledge, the community responsible
for OpenStreetMap succeeded in making the experiences of the Wikipedia
project useful for the creation of a commons based on large datasets,
and managed to adapt these experiences according to the specific needs
of such a project.[^91^](#c3-note-0091){#c3-note-0091a}

It is of great political significance that informational commons have
expanded into the areas of data recording and data use. Control over
data, which specify and describe the world in real time, is an essential
element of the contemporary constitution of power. From large volumes
of data, new types of insight can be gained and new strategies for
action can be derived. The more one-sided access to data becomes, the
more it yields imbalances of power.

In this regard, the commons model offers an alternative, for it allows
various groups equal and unobstructed access to this potential resource
of power. This, at least, is how the Open Data movement sees things.
Data are considered "open" if they are available to everyone without
restriction to be used, distributed, and developed freely. For this to
occur, it is necessary to provide data in a standard-compatible format
that is machine-readable. Only in such a way can they be browsed by
algorithms and further processed. Open data are an important
precondition for implementing the power of algorithms in a democratic
manner. They ensure that there can be an effective diversity of
algorithms, for anyone can write his or her own algorithm or commission
others to process data in various ways and in light of various
interests. Because algorithms cannot be neutral, their diversity -- and
the resulting ability to compare the results of different methods -- is
an important precondition for them not becoming an uncontrollable
instrument of power. This can be achieved most dependably through free
access to data, which are maintained and cultivated as a commons.
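
What "machine-readable" means in practice is easily shown. Once a dataset is published in a plain, standardized format such as CSV, anyone can process it with an algorithm of their own choosing. The file name and column names in the following sketch are invented.

```python
import csv
from collections import defaultdict

# Aggregate an open dataset by one of its columns; the file and its
# columns ("department", "amount") are hypothetical examples.
totals = defaultdict(float)
with open("municipal_budget.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        totals[row["department"]] += float(row["amount"])

for department, amount in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"{department}: {amount:.2f}")
```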

Motivated by the conviction that free access to data represents a
necessary condition for autonomous activity in the []{#Page_167
type="pagebreak" title="167"}digital condition, many new initiatives
have formed that are devoted to the decentralized collection,
networking, and communal organization of data. For several years, for
instance, there has been a global community of people who observe
airplanes in their field of vision, share this information with one
another, and make it generally accessible. Outside of the tight
community, these data are typically of little interest. Yet it was
through his targeted analysis of this information that the geographer
and artist Trevor Paglen succeeded in mapping out the secret arrests
made by American intelligence services. Ultimately, even the CIA\'s
clandestine airplanes have to take off and land like any others, and
thus they can be observed.[^92^](#c3-note-0092){#c3-note-0092a} Around
the collection of environmental data, a movement has formed whose
adherents enter measurements themselves. To cite just one example:
thanks to a successful crowdfunding campaign that raised more than
\$144,000 (just \$39,000 was needed), it was possible to finance the
development of a simple set of sensors called the Air Quality Egg. This
device can measure the concentration of carbon dioxide or nitrogen
dioxide in the air and send its findings to a public database. It
involves the use of relatively simple technologies that are likewise
freely licensed (open hardware). How to build and use it is documented
in such a detailed and user-friendly manner -- in instructional videos
on YouTube, for instance -- that anyone so inclined can put one together
on his or her own, and it would also be easy to have them made on a
large scale as a commercial product. Over time, this has brought about a
network of stations that is able to measure the quality of the air
exactly, locally, and in places that are relevant to users. All of this
information is stored in a global and freely accessible database, from
which it is possible to look up and analyze hyper-local data in real
time and without restrictions.[^93^](#c3-note-0093){#c3-note-0093a}
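
In outline, each station does little more than take a reading and hand it over to the shared database. The following sketch shows what such a reporting step could look like; the endpoint, field names, and values are invented, and the actual Air Quality Egg uses its own hardware and service.

```python
import json
import time
import urllib.request

def report(no2_ppb, station_id="station-001",
           endpoint="https://example.org/api/measurements"):
    """Send one (hypothetical) air-quality reading to a public database."""
    payload = json.dumps({
        "station": station_id,
        "no2_ppb": no2_ppb,
        "timestamp": int(time.time()),
    }).encode("utf-8")
    request = urllib.request.Request(
        endpoint, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return response.status

# report(21.5)  # would transmit a single reading to the shared pool
```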

A list of examples of data commons, both the successful and the
unsuccessful, could go on and on. It will suffice, however, to point out
that many new commons have come about that are redefining the interface
between physical and informational space and creating new strategies for
actions in both directions. The Air Quality Egg, which is typical in
this regard, also demonstrates that commons can develop cumulatively.
Free software and free hardware are preconditions for []{#Page_168
type="pagebreak" title="168"}producing and networking such an object. No
less important are commercial and non-commercial infrastructures for
communal learning, compiling documentation, making information
available, and thus facilitating access for those interested and
building up the community. All of this depends on free knowledge, from
Wikipedia to scientific databases. This enables a great variety of
actors -- in this case environmental scientists, programmers,
engineers, and interested citizens -- to come together and create a
common frame of reference in which everyone can pursue his or her own
goals and yet do so on the basis of communal resources. This, in turn,
has given rise to a new commons, namely that of environmental data.

Not all data can or must be collected by individuals, for a great deal
of data already exists. That said, many scientific and state
institutions face the problem of having data that, though nominally
public (or at least publicly funded), are in fact extremely difficult
for third parties to use. Such information may exist, but it is kept in
institutions to which there is no or little public access, or it exists
only in analog or non-machine-readable formats (as PDFs of scanned
documents, for instance), or its use is tied to high license fees. One
of the central demands of the Open Data and Open Access movements is
thus to have free access to these collections. Yet there has been a
considerable amount of resistance. Whether for political or economic
reasons, many public and scientific institutions do not want their data
to be freely accessible. In many cases, moreover, they also lack the
competence, guidelines, budgets, and internal processes that would be
necessary to make their data available to begin with. But public
pressure has been mounting, not least through initiatives such as the
global Open Data Index, which compares countries according to the
accessibility of their information.[^94^](#c3-note-0094){#c3-note-0094a}
In Germany, the Digital Openness Index evaluates states and communities
in terms of open data, the use of open-source software, the availability
of open infrastructures (such as free internet access in public places),
open policies (the licensing of public information,
freedom-of-information laws, the transparency of budget planning, etc.),
and open education (freely accessible educational resources, for
instance).[^95^](#c3-note-0095){#c3-note-0095a} The results are rather
sobering. The Open Data Index has identified 10 []{#Page_169
type="pagebreak" title="169"}different datasets that ought to be open,
including election results, company registries, maps, and national
statistics. A study of 97 countries revealed that, by the middle of
2015, only 11 percent of these datasets were entirely freely accessible
and usable.

Although public institutions are generally slow and resistant in making
their data freely available, important progress has nevertheless been
made. Such progress indicates not only that the new commons have
developed their own structures in parallel with traditional
institutions, but also that the commoners have begun to make new demands
on established institutions. These are intended to change their internal
processes and their interaction with citizens in such a way that they
support the creation and growth of commons. This is not something that
can be achieved overnight, for the institutions in question need to
change at a fundamental level with respect to their procedures,
self-perception, and relation to citizens. This is easier said than
done.
:::

::: {.section}
### Municipal infrastructures as commons: citizen networks {#c3-sec-0015}

The demands for open access to data, however, are not exhausted by
attempts to redefine public institutions and civic participation. In
fact, they go far beyond that. In Germany, for instance, there has been
a recent movement toward (re-)communalizing the basic provision of water
and energy. Its goal is not merely to shift the ownership structure from
private to public. Rather, its intention is to reorient the present
institutions so that, instead of operating entirely on the basis of
economic criteria, they also take into account democratic, ecological,
and social factors. These efforts reached a high point in November 2013,
when the population of Berlin was called upon to vote on the
communalization of the power supply. Formed in 2011, a non-partisan
coalition of NGOs and citizens known as the Berlin Energy Roundtable had
mobilized to take over the local energy grid, whose license was due to
become available in 2014. The proposal was for the network to be
administered neither entirely privately nor entirely by the public.
Instead, the license was to be held by a newly formed municipal utility
that would not only []{#Page_170 type="pagebreak" title="170"}organize
the efficient operation of the grid but also pursue social causes, such
as the struggles against energy poverty and power cuts, and support
ecological causes, including renewable energy sources and energy
conservation. It was intended, moreover, for the utility to be
democratically organized; that is, for it to offer expanded
opportunities for civic participation on the basis of the complete
transparency of its internal processes in order to increase -- and
ensure for the long term -- the acceptance and identification of
citizens.

Yet it did not get that far. Even though the outcome was extremely
close, the referendum failed to pass. While 83 percent voted in favor of the
new utility, the necessary quorum of 25 percent of all eligible voters
was not quite achieved (the voter turnout was 24.71 percent).
Nevertheless, the vote represented a milestone. For the first time ever
in a large European metropolis, a specific model "beyond the market and
the state" had been proposed for an essential aspect of everyday life
and put before the people. A central component of infrastructure, the
reliability of which is absolutely indispensable for life in any modern
city, was close to being treated as a common good, supported by a new
institution, and governed according to a statute that explicitly
formulated economic, social, ecological, and democratic goals on equal
terms. This would not have resulted in a commons in the strict sense,
but rather in a new public institution that would have adopted and
embodied the values and orientations that, because of the activity of
commons, have increasingly become everyday phenomena in the digital
condition.

In its effort to develop institutional forms beyond the market and the
state, the Berlin Energy Roundtable is hardly unique. It is rather part
of a movement that is striving for fundamental change and is in many
respects already quite advanced. In Denmark, for example, not only does
a comparatively large amount of energy come from renewable sources (27.2
percent of total use, as of 2014), but 80 percent of the country\'s
wind-generated electricity is produced by self-administered cooperatives
or by individual people and
households.[^96^](#c3-note-0096){#c3-note-0096a} The latter, as is
typical of commons, function simultaneously as producers and consumers.

It is not a coincidence that commons have begun to infiltrate the energy
sector. As Jeremy Rifkin has remarked:[]{#Page_171 type="pagebreak"
title="171"}

::: {.extract}
The generation that grew up on the Communication Internet and that takes
for granted its right to create value in distributed, collaborative,
peer-to-peer virtual commons has little hesitation about generating
their own green electricity and sharing it on an Energy Internet. They
find themselves living through a deepening global economic crisis and an
even more terrifying shift in the earth\'s climate, caused by an
economic system reliant on fossil fuel energy and managed by
centralized, top-down command and control systems. If they fault the
giant telecommunications, media and entertainment companies for blocking
their right to collaborate freely with their peers in an open
Information Commons, they are no less critical of the world\'s giant
energy, power, and utility companies, which they blame, in part, for the
high price of energy, a declining economy and looming environmental
crisis.[^97^](#c3-note-0097){#c3-note-0097a}
:::

It is not necessary to see in this, as Rifkin and a few others have
done, the ineluctable demise of
capitalism.[^98^](#c3-note-0098){#c3-note-0098a} Yet, like the influence
of post-democratic institutions over social mass media and beyond, the
commons are also shaping new expectations about possible courses of
action and about the institutions that might embody these possibilities.
:::

::: {.section}
### Eroding the commons: cloud software and the sharing economy {#c3-sec-0016}

Even if the commons have recently enjoyed a renaissance, their continued
success is far from guaranteed. This is not only because legal
frameworks, then and now, are not oriented toward them. Two movements
currently stand out that threaten to undermine the commons from within
before they can properly establish themselves. These movements have been
exploiting certain aspects of the commons while pursuing goals that are
harmful to them. Thus, there are ways of using communal resources in
order to offer, on their basis, closed and centralized services. An
example of this is so-called cloud software; that is, applications that
no longer have to be installed on the computer of the user but rather
are centrally run on the providers\' servers. Such programs are no
longer operated in the traditional sense, and thus they are exempt from
the obligations mandated by free licenses. They do not, []{#Page_172
type="pagebreak" title="172"}in other words, have to make their readable
source code available along with their executable program code. Cloud
providers are thus able to make wide use of free software, but they
contribute very little to its further development. The changes that they
make are implemented exclusively on their own computers and therefore do
not have to be made public. They thus follow the letter of the
license, but not its spirit. Through the control of services, it is also
possible for nominally free and open-source software to be centrally
controlled. Google\'s Android operating system for smartphones consists
largely of free software, but by integrating it so deeply with its
closed applications (such as Google Maps and Google Play Store), the
company ensures that even modified versions of the system will supply
data in which Google has an
interest.[^99^](#c3-note-0099){#c3-note-0099a}

The idea of the communal use and provision of resources is eroded most
clearly by the so-called sharing economy, especially by companies such
as the short-term lodging service Airbnb or Uber, which began as a taxi
service but has since expanded into other areas of business. In such
cases, terms like "open" or "sharing" do little more than give a trendy
and positive veneer to hyper-capitalistic structures. Instead of
supporting new forms of horizontal cooperation, the sharing economy is
forcing more and more people into working conditions in which they have
to assert themselves on their own, without insurance and with complete
flexibility, all the while being coordinated by centralized,
internet-based platforms.[^100^](#c3-note-0100){#c3-note-0100a} Although
the companies in question take a significant portion of overall revenue
for their "intermediary" services, they act as though they merely
facilitate such work and thus take no responsibility for their "newly
self-employed" freelance
workforce.[^101^](#c3-note-0101){#c3-note-0101a} The risk is passed on
to individual providers, who are in constant competition with one
another, and this only heightens the precariousness of labor relations.
As is typical of post-democratic institutions, the sharing economy has
allowed certain disparities to expand into broader sectors of society,
namely the power and income gap that exists between those who
"voluntarily" use these services and the providers that determine the
conditions imposed by the platforms in question.[]{#Page_173
type="pagebreak" title="173"}
:::
:::

::: {.section}
Against a Lack of Alternatives {#c3-sec-0017}
------------------------------

For now, the digital condition has given rise to two highly divergent
political tendencies. The tendency toward "post-democracy" is
essentially leading to an authoritarian society. Although this society
may admittedly contain a high degree of cultural diversity, and although
its citizens are able to (or have to) lead their lives in a
self-responsible manner, they are no longer able to exert any influence
over the political and economic structures in which their lives are
unfolding. On the basis of data-intensive and comprehensive
surveillance, these structures are instead shaped disproportionately by
an influential few. The resulting imbalance of power has been growing
steadily, as has income inequality. In contrast to this, the tendency
toward commons is leading to a renewal of democracy, based on
institutions that exist outside of the market and the state. At its core
this movement involves a new combination of economic, social, and
(ever-more pressing) ecological dimensions of everyday life on the basis
of data-intensive participatory processes.

What these two developments share in common is their comprehensive
realization of the infrastructural possibilities of the present. Both of
them develop new relations of production on the basis of new productive
forces (to revisit the terminology introduced at the beginning of this
chapter) or, in more general terms, they create suitable social
institutions for these new opportunities. In this sense, both
developments represent coherent and comprehensive answers to the
Gutenberg Galaxy\'s long-lasting crisis of cultural forms and social
institutions.

It remains to be seen whether one of these developments will prevail
entirely or whether and how they will coexist. Despite all of the new
and specialized methods for making predictions, the future is still
largely unpredictable. Too many moving variables are at play, and they
are constantly influencing one another. This is not least the case
because everyone\'s activity -- at times singularly aggregated, at times
collectively organized -- is contributing directly and indirectly to
these contradictory developments. And even though an individual or
communal contribution may seem small, it is still exactly []{#Page_174
type="pagebreak" title="174"}that: a contribution to a collective
movement in one direction or the other. This assessment should not be
taken as some naïve appeal along the lines of "Be the change you want to
see!" The issue here is not one of personal attitudes but rather of
social structures. Effective change requires forms of organization that
are able to implement it for the long term and in the face of
resistance. In this regard, the side of the commons has a great deal
more work to do.

Yet if, despite all of the simplifications that I have made, this
juxtaposition of post-democracy and the commons has revealed anything,
it is that even rapid changes, whose historical and structural
dimensions cannot be controlled on account of their overwhelming
complexity, are anything but fixed in their concrete social
formulations. Even if it is impossible to preserve the old institutions
and cultural forms in their traditional roles -- regardless of all the
historical achievements that may be associated with them -- the dispute
over what world we want to live in and the goals that should be achieved
by the available potential of the present is as open as ever. And such
is the case even though post-democracy wishes to abolish the political
itself and subordinate everything to a technocratic lack of
alternatives. The development of the commons, after all, has shown that
genuine, fundamental, and cutting-edge alternatives do indeed exist. The
contradictory nature of the present is keeping the future
open.[]{#Page_175 type="pagebreak" title="175"}
:::

::: {.section .notesSet type="rearnotes"}
[]{#notesSet}Notes {#c3-ntgp-9999}
------------------

::: {.section .notesList}
[1](#c3-note-0001a){#c3-note-0001}  Karl Marx, *A Contribution to the
Critique of Political Economy*, trans. S. W. Ryazanskaya (London:
Lawrence and Wishart, 1971), p. 21.[]{#Page_196 type="pagebreak"
title="196"}

[2](#c3-note-0002a){#c3-note-0002}  See, for instance, Tomasz Konicz and
Florian Rötzer (eds), *Aufbruch ins Ungewisse: Auf der Suche nach
Alternativen zur kapitalistischen Dauerkrise* (Hanover: Heise
Zeitschriften Verlag, 2014).

[3](#c3-note-0003a){#c3-note-0003}  Jacques Rancière, *Disagreement:
Politics and Philosophy*, trans. Julie Rose (Minneapolis, MN: University
of Minnesota Press, 1999), p. 102 (the emphasis is original).

[4](#c3-note-0004a){#c3-note-0004}  Colin Crouch, *Post-Democracy*
(Cambridge: Polity, 2004), p. 4.

[5](#c3-note-0005a){#c3-note-0005}  Ibid., p. 6.

[6](#c3-note-0006a){#c3-note-0006}  Ibid., p. 96.

[7](#c3-note-0007a){#c3-note-0007}  These questions have already been
discussed at length, for instance in a special issue of the journal
*Neue Soziale Bewegungen* (vol. 4, 2006) and in the first two issues of
the journal *Aus Politik und Zeitgeschichte* (2011).

[8](#c3-note-0008a){#c3-note-0008}  See Jonathan B. Postel, "RFC 821,
Simple Mail Transfer Protocol," *Information Sciences Institute:
University of Southern California* (August 1982), online: "An important
feature of SMTP is its capability to relay mail across transport service
environments."

[9](#c3-note-0009a){#c3-note-0009}  One of the first providers of
Webmail was Hotmail, which became available in 1996. Just one year
later, the company was purchased by Microsoft.

[10](#c3-note-0010a){#c3-note-0010}  Barton Gellman and Ashkan Soltani,
"NSA Infiltrates Links to Yahoo, Google Data Centers Worldwide, Snowden
Documents Say," *Washington Post* (October 30, 2013), online.

[11](#c3-note-0011a){#c3-note-0011}  Initiated by hackers and activists,
the Mailpile project raised more than \$160,000 in September 2013 (the
fundraising goal had been just \$100,000). In July 2014, the rather
business-oriented project ProtonMail raised \$400,000 (its target, too,
had been just \$100,000).

[12](#c3-note-0012a){#c3-note-0012}  In July 2014, for instance, Google
announced that it would support "end-to-end" encryption for emails. See
"Making End-to-End Encryption Easier to Use," *Google Security Blog*
(June 3, 2014), online.

[13](#c3-note-0013a){#c3-note-0013}  Not all services use algorithms to
sort through data. Twitter does not filter the news stream of individual
users but rather allows users to create their own lists or to rely on
external service providers to select and configure them. This is one of
the reasons why Twitter is regarded as "difficult." The service is so
centralized, however, that this can change at any time, which indeed
happened at the beginning of 2016.

[14](#c3-note-0014a){#c3-note-0014}  Quoted from "Schrems:
'Facebook-Abstimmung ist eine Farce'," *Futurezone.at* (July 4, 2012),
online \[--trans.\].

[15](#c3-note-0015a){#c3-note-0015}  Elliot Schrage, "Proposed Updates
to Our Governing Documents," [Facebook.com](http://Facebook.com)
(November 21, 2011), online.[]{#Page_197 type="pagebreak" title="197"}

[16](#c3-note-0016a){#c3-note-0016}  Quoted from the documentary film
*Terms and Conditions May Apply* (2013), directed by Cullen Hoback.

[17](#c3-note-0017a){#c3-note-0017}  Felix Stalder and Christine Mayer,
"Der zweite Index: Suchmaschinen, Personalisierung und Überwachung," in
Konrad Becker and Felix Stalder (eds), *Deep Search: Die Politik des
Suchens jenseits von Google* (Innsbruck: Studienverlag, 2009), pp.
112--31.

[18](#c3-note-0018a){#c3-note-0018}  Thus, in 2012, Google announced
under a rather generic and difficult-to-Google headline that, from now
on, "we may combine information you\'ve provided from one service with
information from other services." See "Updating Our Privacy Policies and
Terms of Service," *Google Official Blog* (January 24, 2012), online.

[19](#c3-note-0019a){#c3-note-0019}  Wolfie Christl, "Kommerzielle
digitale Überwachung im Alltag," *Studie im Auftrag der
Bundesarbeitskammer* (November 2014), online.

[20](#c3-note-0020a){#c3-note-0020}  Viktor Mayer-Schönberger and
Kenneth Cukier, *Big Data: A Revolution That Will Change How We Live,
Work and Think* (Boston, MA: Houghton Mifflin Harcourt, 2013).

[21](#c3-note-0021a){#c3-note-0021}  Carlos Diuk, "The Formation of
Love," *Facebook Data Science Blog* (February 14, 2014), online.

[22](#c3-note-0022a){#c3-note-0022}  Facebook could have determined this
simply by examining the location data that were transmitted by its own
smartphone app. The study in question, however, did not take such
information into account.

[23](#c3-note-0023a){#c3-note-0023}  Dan Lyons, "A Lot of Top
Journalists Don\'t Look at Traffic Numbers: Here\'s Why," *Huffington
Post* (March 27, 2014), online.

[24](#c3-note-0024a){#c3-note-0024}  Adam Kramer et al., "Experimental
Evidence of Massive-Scale Emotional Contagion through Social Networks,"
*Proceedings of the National Academy of Sciences* 111 (2014): 8788--90.

[25](#c3-note-0025a){#c3-note-0025}  In all of these studies, it was
presupposed that users present themselves naïvely and entirely
truthfully. If someone writes something positive ("I\'m doing great!"),
it is assumed that this person really is doing well. This, of course, is
a highly problematic assumption. See John M. Grohol, "Emotional Contagion
on Facebook? More Like Bad Research Methods," *PsychCentral* (June 23,
2014), online.

[26](#c3-note-0026a){#c3-note-0026}  See Adrienne LaFrance, "Even the
Editor of Facebook\'s Mood Study Thought It Was Creepy," *The Atlantic*
(June 29, 2014), online: "\[T\]he authors \[...\] said their local
institutional review board had approved it -- and apparently on the
grounds that Facebook apparently manipulates people\'s News Feeds all
the time."

[27](#c3-note-0027a){#c3-note-0027}  In a rare moment of openness, the
founder of a large dating service made the following remark: "But guess
what, everybody: []{#Page_198 type="pagebreak" title="198"}if you use
the Internet, you\'re the subject of hundreds of experiments at any
given time, on every site. That\'s how websites work." See Christian
Rudder, "We Experiment on Human Beings!" *OKtrends* (July 28, 2014),
online.

[28](#c3-note-0028a){#c3-note-0028}  Zoe Corbyn, "Facebook Experiment
Boosts US Voter Turnout," *Nature* (September 12, 2012), online. Because
of the relative homogeneity of social groups, it can be assumed that a
large majority of those who were indirectly influenced to vote have the
same political preferences as those who were directly influenced.

[29](#c3-note-0029a){#c3-note-0029}  In the year 2000, according to the
official count, George W. Bush won the decisive state of Florida by a
mere 537 votes.

[30](#c3-note-0030a){#c3-note-0030}  Jonathan Zittrain, "Facebook Could
Decide an Election without Anyone Ever Finding Out," *New Republic*
(June 1, 2014), online.

[31](#c3-note-0031a){#c3-note-0031}  This was the central insight that
Norbert Wiener drew from his experiments on air defense during World War
II. Although it could never be applied during the war itself, it would
nevertheless prove of great importance to the development of
cybernetics.

[32](#c3-note-0032a){#c3-note-0032}  Gregory Bateson, "Social Planning
and the Concept of Deutero-learning," in Bateson, *Steps to an Ecology
of Mind: Collected Essays in Anthropology, Psychiatry, Evolution and
Epistemology* (London: Jason Aronson, 1972), pp. 166--82, at 177.

[33](#c3-note-0033a){#c3-note-0033}  Tiqqun, "The Cybernetic
Hypothesis," p. 4 (online).

[34](#c3-note-0034a){#c3-note-0034}  B. F. Skinner, *The Behavior of
Organisms: An Experimental Analysis* (New York: Appleton Century, 1938).

[35](#c3-note-0035a){#c3-note-0035}  Richard H. Thaler and Cass
Sunstein, *Nudge: Improving Decisions about Health, Wealth and
Happiness* (New York: Penguin, 2008).

[36](#c3-note-0036a){#c3-note-0036}  It happened repeatedly, for
instance, that pictures of breastfeeding mothers would be removed
because they apparently violated Facebook\'s rule against sharing
pornography. After a long protest, Facebook changed its "community
standards" in 2014. Under the term "Nudity," it now reads as follows:
"We also restrict some images of female breasts if they include the
nipple, but we always allow photos of women actively engaged in
breastfeeding or showing breasts with post-mastectomy scarring. We also
allow photographs of paintings, sculptures and other art that depicts
nude figures." See "Community Standards,"
[Facebook.com](http://Facebook.com) (2017), online.

[37](#c3-note-0037a){#c3-note-0037}  Michael Seemann, *Digital Tailspin:
Ten Rules for the Internet after Snowden* (Amsterdam: Institute for
Network Cultures, 2015).

[38](#c3-note-0038a){#c3-note-0038}  The exception to this is fairtrade
products, whose higher prices are legitimated with reference to
[]{#Page_199 type="pagebreak" title="199"}the input -- that is, to the
social and ecological conditions of their production.

[39](#c3-note-0039a){#c3-note-0039}  This is only partially true,
however, as more institutions (universities, for instance) have begun to
outsource their technical infrastructure (to Google Mail, for example).
In such cases, people are indeed being coerced, in the classical sense,
to use these services.

[40](#c3-note-0040a){#c3-note-0040}  Mary Madden et al., "Teens, Social
Media and Privacy," *Pew Research Center: Internet, Science & Tech* (May
21, 2013), online.

[41](#c3-note-0041a){#c3-note-0041}  Meta-data are data that provide
information about other data. In the case of an email, the header lines
(the sender, recipient, date, subject, etc.) form the meta-data, while
the data are made up of the actual content of communication. In
practice, however, the two categories cannot always be sharply
distinguished from one another.
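
To make the distinction concrete, here is a minimal Python sketch (using the
standard library's `email` module; the addresses and message are invented for
illustration) that separates a message's header meta-data from its content:

```python
from email import message_from_string

# A hypothetical raw email, invented purely for illustration.
raw = (
    "From: alice@example.org\n"
    "To: bob@example.org\n"
    "Date: Mon, 1 Jan 2018 12:00:00 +0000\n"
    "Subject: Lunch?\n"
    "\n"
    "Shall we meet at noon?\n"
)

msg = message_from_string(raw)

# The header lines are the meta-data: who wrote to whom, when, and about what.
metadata = {key: msg[key] for key in ("From", "To", "Date", "Subject")}

# The body is the data: the actual content of the communication.
# Note that the Subject line, formally a header, already carries content,
# which is one reason the two categories blur in practice.
content = msg.get_payload()

print(metadata)
print(content)
```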

[42](#c3-note-0042a){#c3-note-0042}  By manipulating online polls, for
instance, or flooding social mass media with algorithmically generated
propaganda. See Glenn Greenwald, "Hacking Online Polls and Other Ways
British Spies Seek to Control the Internet," *The Intercept* (July 14,
2014), online.

[43](#c3-note-0043a){#c3-note-0043}  Jeremy Scahill and Glenn Greenwald,
"The NSA\'s Secret Role in the US Assassination Program," *The
Intercept* (February 10, 2014), online.

[44](#c3-note-0044a){#c3-note-0044}  Regarding the interconnections
between Google and the US State Department, see Julian Assange, *When
Google Met WikiLeaks* (New York: O/R Books, 2014).

[45](#c3-note-0045a){#c3-note-0045}  For a catalog of these
publications, see the DARPA website:
\<[opencatalog.darpa.mil/SMISC.html](http://opencatalog.darpa.mil/SMISC.html)\>.

[46](#c3-note-0046a){#c3-note-0046}  See the military\'s own description
of the project at:
\<[minerva.dtic.mil/funded.html](http://minerva.dtic.mil/funded.html)\>.

[47](#c3-note-0047a){#c3-note-0047}  Such is the goal stated on the
project\'s homepage: \<\>.

[48](#c3-note-0048a){#c3-note-0048}  Bruce Schneier, "Don\'t Listen to
Google and Facebook: The Public--Private Surveillance Partnership Is
Still Going Strong," *The Atlantic* (March 25, 2014), online.

[49](#c3-note-0049a){#c3-note-0049}  See the documentary film *Low
Definition Control* (2011), directed by Michael Palm.

[50](#c3-note-0050a){#c3-note-0050}  Felix Stalder, "In der zweiten
digitalen Phase: Daten versus Kommunikation," *Le Monde Diplomatique*
(February 14, 2014), online.

[51](#c3-note-0051a){#c3-note-0051}  In 2009, the European Parliament
and the European Council ratified Directive 2009/72/EC, which stipulates
that, by the year 2020, 80 percent of all households in the EU will have
to be equipped with an intelligent metering system.[]{#Page_200
type="pagebreak" title="200"}

[52](#c3-note-0052a){#c3-note-0052}  There is no consensus about how or
whether smart meters will contribute to the more efficient use of
energy. On the contrary, one study commissioned by the German Federal
Ministry for Economic Affairs and Energy concluded that the
comprehensive implementation of smart metering would have negative
economic effects for consumers. See Helmut Edelmann and Thomas Kästner,
"Cost--Benefit Analysis for the Comprehensive Use of Smart Metering,"
*Ernst & Young* (June 2013), online.

[53](#c3-note-0053a){#c3-note-0053}  Quoted from "United Nations Working
towards Urbanization," *United Nations Urbanization Agenda* (July 7,
2015), online. For a comprehensive critique of such visions, see Adam
Greenfield, *Against the Smart City* (New York City: Do Projects, 2013).

[54](#c3-note-0054a){#c3-note-0054}  Stefan Selke, *Lifelogging: Warum
wir unser Leben nicht digitalen Technologien überlassen sollten*
(Berlin: Econ, 2014).

[55](#c3-note-0055a){#c3-note-0055}  Rainer Schneider, "Rabatte für
Gesundheitsdaten: Was die deutschen Krankenversicherer planen," *ZDNet*
(December 18, 2014), online \[--trans.\].

[56](#c3-note-0056a){#c3-note-0056}  Frank Pasquale, *The Black Box
Society: The Secret Algorithms that Control Money and Information*
(Cambridge, MA: Harvard University Press, 2015).

[57](#c3-note-0057a){#c3-note-0057}  "Facebook Gives People around the
World the Power to Publish Their Own Stories," *Facebook Help Center*
(2017), online.

[58](#c3-note-0058a){#c3-note-0058}  Lena Kampf et al., "Deutsche im
NSA-Visier: Als Extremist gebrandmarkt," *Tagesschau.de* (July 3, 2014),
online.

[59](#c3-note-0059a){#c3-note-0059}  Florian Klenk, "Der Prozess gegen
Josef S.," *Falter* (July 8, 2014), online.

[60](#c3-note-0060a){#c3-note-0060}  Zygmunt Bauman, *Liquid Modernity*
(Cambridge: Polity, 2000), p. 35.

[61](#c3-note-0061a){#c3-note-0061}  This is so regardless of whether
the dominant regime, eager to seem impervious to opposition, represents
itself as the one and only alternative. See Byung-Chul Han, "Why
Revolution Is No Longer Possible," *Transformation* (October 23, 2015),
online.

[62](#c3-note-0062a){#c3-note-0062}  See the *Süddeutsche Zeitung*\'s
special website devoted to the "Offshore Leaks":
\.

[63](#c3-note-0063a){#c3-note-0063}  The *Süddeutsche Zeitung*\'s
website devoted to the "Luxembourg Leaks" can be found at:
\.

[64](#c3-note-0064a){#c3-note-0064}  See the documentary film
*Citizenfour* (2014), directed by Laura Poitras.

[65](#c3-note-0065a){#c3-note-0065}  Felix Stalder, "WikiLeaks und die
neue Ökologie der Nachrichtenmedien," in Heinrich Geiselberger (ed.),
*WikiLeaks und die Folgen* (Berlin: Suhrkamp, 2011), pp.
96--110.[]{#Page_201 type="pagebreak" title="201"}

[66](#c3-note-0066a){#c3-note-0066}  Yochai Benkler, "Coase\'s Penguin,
or, Linux and the Nature of the Firm," *Yale Law Journal* 112 (2002):
369--446.

[67](#c3-note-0067a){#c3-note-0067}  For an overview of the many commons
traditions, see David Bollier and Silke Helfrich, *The Wealth of the
Commons: A World beyond Market and State* (Amherst: Levellers Press,
2012).

[68](#c3-note-0068a){#c3-note-0068}  Massimo De Angelis and Stavros
Stavrides, "On the Commons: A Public Interview," *e-flux* 17 (June
2010), online.

[69](#c3-note-0069a){#c3-note-0069}  Elinor Ostrom, *Governing the
Commons: The Evolution of Institutions for Collective Action*
(Cambridge: Cambridge University Press, 1990).

[70](#c3-note-0070a){#c3-note-0070}  Michael McGinnis and Elinor Ostrom,
"Design Principles for Local and Global Commons," *International
Political Economy and International Institutions* 2 (1996): 465--93.

[71](#c3-note-0071a){#c3-note-0071}  I say "allegedly" because the
argument about their inevitable tragedy, which has been made without any
empirical evidence, falsely conceives of the commons as a limited but
fully unregulated resource. Because people are only interested in
maximizing their own short-term benefits -- or so the conclusion goes --
the resource will either have to be privatized or administered by the
government in order to protect it from being over-used and to ensure the
well-being of everyone involved. It was never taken into consideration
that users could speak with one another and organize themselves. See
Garrett Hardin, "The Tragedy of the Commons," *Science* 162 (1968):
1243--8.

[72](#c3-note-0072a){#c3-note-0072}  Jonathan Rowe, "The Real Tragedy:
Ecological Ruin Stems from What Happens to -- Not What Is Caused by --
the Commons," *On the Commons* (April 30, 2013), online.

[73](#c3-note-0073a){#c3-note-0073}  James Boyle, "A Politics of
Intellectual Property: Environmentalism for the Net?" *Duke Law Journal*
47 (1997): 87--116.

[74](#c3-note-0074a){#c3-note-0074}  Quoted from:
\<[debian.org/intro/about.html](http://debian.org/intro/about.html)\>.

[75](#c3-note-0075a){#c3-note-0075}  The Debian Social Contract can be
read at: \<\>.

[76](#c3-note-0076a){#c3-note-0076}  Gabriella E. Coleman and Benjamin
Hill, "The Social Production of Ethics in Debian and Free Software
Communities: Anthropological Lessons for Vocational Ethics," in Stefan
Koch (ed.), *Free/Open Source Software Development* (Hershey, PA: Idea
Group, 2005), pp. 273--95.

[77](#c3-note-0077a){#c3-note-0077}  While it is relatively easy to
identify the inner circle of such a project, it is impossible to
determine the number of those who have contributed to it. This is
because, among other reasons, the distinction between producers and
consumers is so fluid that any firm line drawn between them for
quantitative purposes would be entirely arbitrary. Should someone who
writes the documentation be considered a producer of a software
[]{#Page_202 type="pagebreak" title="202"}project? To be counted as
such, is it sufficient to report a single bug? Or to confirm the
validity of a bug report that has already been sent? Should everyone be
counted who has helped another person solve a problem in a forum?

[78](#c3-note-0078a){#c3-note-0078}  Raphaël Hertzog, "The State of the
Debian--Ubuntu Relationship" (December 6, 2010), online.

[79](#c3-note-0079a){#c3-note-0079}  This, in any case, is the number of
free software programs that appears in Wikipedia\'s entry titled "List
of Text Editors." This list, however, is probably incomplete.

[80](#c3-note-0080a){#c3-note-0080}  In this regard, the most
significant legal changes were enacted through the Copyright Treaty of
the World Intellectual Property Organization (1996), the US Digital
Millennium Copyright Act (1998), and the EU guidelines for the
harmonization of certain aspects of copyright (2001). Since 2006, a
popular tactic in Germany and elsewhere has been to issue floods of
cease-and-desist letters. This involves sending tens of thousands of
semi-automatically generated threats of legal action with demands for
payment in response to the presumably unauthorized use of
copyright-protected material.

[81](#c3-note-0081a){#c3-note-0081}  Examples include the Open Content
License (1998) and the Free Art License (2000).

[82](#c3-note-0082a){#c3-note-0082}  Benjamin Mako Hill, "Towards a
Standard of Freedom: Creative Commons and the Free Software Movement,"
*mako.cc* (June 29, 2005), online.

[83](#c3-note-0083a){#c3-note-0083}  Since 2007, Wikipedia has
continuously been one of the 10 most-used websites.

[84](#c3-note-0084a){#c3-note-0084}  One of the best studies of
Wikipedia remains Christian Stegbauer, *Wikipedia: Das Rätsel der
Kooperation* (Wiesbaden: Verlag für Sozialwissenschaften, 2009).

[85](#c3-note-0085a){#c3-note-0085}  Dan Wielsch, "Governance of Massive
Multiauthor Collaboration -- Linux, Wikipedia and Other Networks:
Governed by Bilateral Contracts, Partnerships or Something in Between?"
*JIPITEC* 1 (2010): 96--108.

[86](#c3-note-0086a){#c3-note-0086}  See Wikipedia\'s 2013--14
fundraising report at:
\<[meta.wikimedia.org/wiki/Fundraising/2013-14\_Report](http://meta.wikimedia.org/wiki/Fundraising/2013-14_Report)\>.

[87](#c3-note-0087a){#c3-note-0087}  Roland Ramthun, "Offene Geodaten
durch OpenStreetMap," in Ulrich Herb (ed.), *Open Initiatives: Offenheit
in der digitalen Welt und Wissenschaft* (Saarbrücken: Universaar, 2012),
pp. 159--84.

[88](#c3-note-0088a){#c3-note-0088}  "Automated Edits Code of Conduct,"
[wiki.openstreetmap.org](http://wiki.openstreetmap.org) (March 15, 2015),
online.

[89](#c3-note-0089a){#c3-note-0089}  See the information provided at:
\<[wiki.osmfoundation.org/wiki/Finances](http://wiki.osmfoundation.org/wiki/Finances)\>.

[90](#c3-note-0090a){#c3-note-0090}  As part of its "Knight News
Challenge," for instance, the American Knight Foundation gave \$570,000
in 2012 to the []{#Page_203 type="pagebreak" title="203"}company Mapbox
in order for the latter to make improvements to OSM\'s infrastructure.

[91](#c3-note-0091a){#c3-note-0091}  This was accomplished, for
instance, by introducing methods for data indexing and quality control.
See Ramthun, "Offene Geodaten durch OpenStreetMap" (cited above).

[92](#c3-note-0092a){#c3-note-0092}  Trevor Paglen and Adam C. Thompson,
*Torture Taxi: On the Trail of the CIA\'s Rendition Flights* (Hoboken,
NJ: Melville House, 2006).

[93](#c3-note-0093a){#c3-note-0093}  See the project\'s website:
\<[airqualityegg.com](http://airqualityegg.com)\>.

[94](#c3-note-0094a){#c3-note-0094}  See the project\'s homepage:
\<[index.okfn.org](http://index.okfn.org)\>.

[95](#c3-note-0095a){#c3-note-0095}  The homepage of the Digital
Openness Index can be found at: \<[do-index.org](http://do-index.org)\>.

[96](#c3-note-0096a){#c3-note-0096}  Tildy Bayar, "Community Wind
Arrives Stateside," *Renewable Energy World* (July 5, 2012), online.

[97](#c3-note-0097a){#c3-note-0097}  Jeremy Rifkin, *The Zero Marginal
Cost Society: The Internet of Things, the Collaborative Commons and the
Eclipse of Capitalism* (New York: Palgrave Macmillan, 2014), p. 217.

[98](#c3-note-0098a){#c3-note-0098}  See, for instance, Ludger
Eversmann, *Post-Kapitalismus: Blueprint für die nächste Gesellschaft*
(Hanover: Heise Zeitschriften Verlag, 2014).

[99](#c3-note-0099a){#c3-note-0099}  Ron Amadeo, "Google\'s Iron Grip on
Android: Controlling Open Source by Any Means Necessary," *Ars Technica*
(October 21, 2013), online.

[100](#c3-note-0100a){#c3-note-0100}  Seb Olma, "To Share or Not to
Share," [nettime.org](http://nettime.org) (October 20, 2014), online.

[101](#c3-note-0101a){#c3-note-0101}  Susie Cagle, "The Case against
Sharing," *The Nib* (May 27, 2014), online.[]{#Page_204 type="pagebreak"
title="204"}
:::
:::

[Copyright page]{.chapterTitle} {#ffirs03}
=
::: {.section}
First published in German as *Kultur der Digitalitaet* © Suhrkamp Verlag,
Berlin, 2016

This English edition © Polity Press, 2018

Polity Press

65 Bridge Street

Cambridge CB2 1UR, UK

Polity Press

101 Station Landing

Suite 300

Medford, MA 02155, USA

All rights reserved. Except for the quotation of short passages for the
purpose of criticism and review, no part of this publication may be
reproduced, stored in a retrieval system or transmitted, in any form or
by any means, electronic, mechanical, photocopying, recording or
otherwise, without the prior permission of the publisher.

P. 51, Brautigan, Richard: From "All Watched Over by Machines of Loving
Grace" by Richard Brautigan. Copyright © 1967 by Richard Brautigan,
renewed 1995 by Ianthe Brautigan Swenson. Reprinted with the permission
of the Estate of Richard Brautigan; all rights reserved.

ISBN-13: 978-1-5095-1959-0

ISBN-13: 978-1-5095-1960-6 (pb)

A catalogue record for this book is available from the British Library.

Library of Congress Cataloging-in-Publication Data

Names: Stalder, Felix, author.

Title: The digital condition / Felix Stalder.

Other titles: Kultur der Digitalitaet. English

Description: Cambridge, UK ; Medford, MA : Polity Press, \[2017\] \|
Includes bibliographical references and index.

Identifiers: LCCN 2017024678 (print) \| LCCN 2017037573 (ebook) \| ISBN
9781509519620 (Mobi) \| ISBN 9781509519637 (Epub) \| ISBN 9781509519590
(hardback) \| ISBN 9781509519606 (pbk.)

Subjects: LCSH: Digital communications--Social aspects. \| Information
society. \| Information society--Forecasting.

Classification: LCC HM851 (ebook) \| LCC HM851 .S728813 2017 (print) \|
DDC 302.23/1--dc23

LC record available at

Typeset in 10.5 on 12 pt Sabon

by Toppan Best-set Premedia Limited

Printed and bound in Great Britain by CPI Group (UK) Ltd, Croydon

The publisher has used its best endeavours to ensure that the URLs for
external websites referred to in this book are correct and active at the
time of going to press. However, the publisher has no responsibility for
the websites and can make no guarantee that a site will remain live or
that the content is or will remain appropriate.

Every effort has been made to trace all copyright holders, but if any
have been inadvertently overlooked the publisher will be pleased to
include any necessary credits in any subsequent reprint or edition.

For further information on Polity, visit our website:
politybooks.com[]{#Page_iv type="pagebreak" title="iv"}
:::


tactics in Thylstrup 2019


Thylstrup
The Politics of Mass Digitization
2019


The Politics of Mass Digitization

Nanna Bonde Thylstrup

The MIT Press

Cambridge, Massachusetts

London, England

# Table of Contents

1. Acknowledgments
2. I Framing Mass Digitization
    1. 1 Understanding Mass Digitization
3. II Mapping Mass Digitization
    1. 2 The Trials, Tribulations, and Transformations of Google Books
    2. 3 Sovereign Soul Searching: The Politics of Europeana
    3. 4 The Licit and Illicit Nature of Mass Digitization
4. III Diagnosing Mass Digitization
    1. 5 Lost in Mass Digitization
    2. 6 Concluding Remarks
5. References
6. Index

## List of figures

1. Figure 2.1 François-Marie Lefevere and Marin Saric. “Detection of grooves in scanned images.” U.S. Patent 7508978B1. Assigned to Google LLC.
2. Figure 2.2 Joseph K. O’Sullivan, Alexander Proudfoot, and Christopher R. Uhlik. “Pacing and error monitoring of manual page turning operator.” U.S. Patent 7619784B1. Assigned to Google LLC, Google Technology Holdings LLC.

# Acknowledgments

I am very grateful to all those who have contributed to this book in various
ways. I owe special thanks to Bjarki Valtysson, Frederik Tygstrup, and Peter
Duelund, for their supervision and help thinking through this project, its
questions, and its forms. I also wish to thank Andrew Prescott, Tobias Olsson,
and Rune Gade for making my dissertation defense a memorable and thoroughly
enjoyable day of constructive critique and lively discussions. Important parts
of the research for this book further took place during three visiting stays
at Cornell University, Duke University, and Columbia University. I am very
grateful to N. Katherine Hayles, Andreas Huyssen, Timothy Brennan, Lydia
Goehr, Rodney Benson, and Fredric Jameson, who generously welcomed me across
the Atlantic and provided me with invaluable new perspectives, as well as
theoretical insights and challenges. Beyond the aforementioned, three people
in particular have been instrumental in terms of reading through drafts and in
providing constructive challenges, intellectual critique, moral support, and
fun times in equal proportions—thank you so much Kristin Veel, Henriette
Steiner, and Daniela Agostinho. Marianne Ping-Huang has further offered
invaluable support to this project and her theoretical and practical
engagement with digital archives and academic infrastructures continues to be
a source of inspiration. I am also immensely grateful to all the people
working on or with mass digitization who generously volunteered their time to
share with me their visions for, and perspectives on, mass digitization.

This book has further benefited greatly from dialogues taking place within the
framework of two larger research projects, which I have been fortunate enough
to be involved in: Uncertain Archives and The Past’s Future. I am very
grateful to all my colleagues in both these research projects: Kristin Veel,
Daniela Agostinho, Annie Ring, Katrine Dirkinck-Holmfeldt, Pepita Hesselberth,
Kristoffer Ørum, Ekaterina Kalinina, Anders Søgaard, as well as Helle Porsdam,
Jeppe Eimose, Stina Teilmann, John Naughton, Jeffrey Schnapp, Matthew Battles,
and Fiona McMillan. I am further indebted to La Vaughn Belle, George Tyson,
Temi Odumosu, Mathias Danbolt, Mette Kia, Lene Asp, Marie Blønd, Mace Ojala,
Renee Ridgway, and many others for our conversations on the ethical issues of
the mass digitization of colonial material. I have also benefitted from the
support and insights offered by other colleagues at the Department of Arts and
Cultural Studies, University of Copenhagen.

A big part of writing a book is also about keeping sane, and for this you need
great colleagues that can pull you out of your own circuit and launch you into
other realms of inquiry through collaboration, conversation, or just good
times. Thank you Mikkel Flyverbom, Rasmus Helles, Stine Lomborg, Helene
Ratner, Anders Koed Madsen, Ulrik Ekman, Solveig Gade, Anna Leander, Mareile
Kaufmann, Holger Schulze, Jakob Kreutzfeld, Jens Hauser, Nan Gerdes, Kerry
Greaves, Mikkel Thelle, Mads Rosendahl Thomsen, Knut Ove Eliassen, Jens-Erik
Mai, Rikke Frank Jørgensen, Klaus Bruhn Jensen, Marisa Cohn, Rachel Douglas-Jones,
Taina Bucher, and Baki Cakici. To this end you also need good
friends—thank you Thomas Lindquist Winther-Schmidt, Mira Jargil, Christian
Sønderby Jepsen, Agnete Sylvest, Louise Michaëlis, Jakob Westh, Gyrith Ravn,
Søren Porse, Jesper Værn, Jacob Thorsen, Maia Kahlke, Josephine Michau, Lærke
Vindahl, Chris Pedersen, Marianne Kiertzner, Rebecca Adler-Nissen, Stig
Helveg, Ida Vammen, Alejandro Savio, Lasse Folke Henriksen, Siine Jannsen,
Rens van Munster, Stephan Alsman, Sayuri Alsman, Henrik Moltke, Sean Treadway,
and many others. I also have to thank Christer and all the people at
Alimentari and CUB Coffee who kept my caffeine levels replenished when I tired
of the ivory tower.

I am furthermore very grateful for the wonderful guidance and support from MIT
Press, including Noah Springer, Marcy Ross, and Susan Clark—and of course for
the many inspiring conversations with and feedback from Doug Sery. I also want
to thank the anonymous peer reviewers whose insightful and constructive
comments helped improve this book immensely. Research for this book was
supported by grants from the Danish Research Council and the Velux Foundation.

Last, but not least, I wish to thank my loving partner Thomas Gammeltoft-Hansen
for his invaluable and critical input, optimistic outlook, and perfect
morning cappuccinos; my son Georg and daughter Liv for their general
awesomeness; and my extended family—Susanne, Bodil, and Hans—for their support
and encouragement.

I dedicate this book to my parents, Karen Lise Bonde Thylstrup and Asger
Thylstrup, without whom neither this book nor I would have materialized.

# I Framing Mass Digitization

# 1 Understanding Mass Digitization

## Introduction

Mass digitization is first and foremost a professional concept. While it has
become a disciplinary buzzword used to describe large-scale digitization
projects of varying scope, it enjoys little circulation beyond the confines of
information science and such projects themselves. Yet, as this book argues, it
has also become a defining concept of our time. Indeed, it has even attained
the status of a cultural and moral imperative and obligation.1 Today, anyone
with an Internet connection can access hundreds of millions of digitized
cultural artifacts from the comfort of their desk—or many other locations—and
cultural institutions and private bodies add thousands of new cultural works
to the digital sphere every day. The practice of mass digitization is forming
new nexuses of knowledge, and new ways of engaging with that knowledge. What
at first glance appears to be a simple act of digitization (the transformation
of singular books from boundary objects to open sets of data), reveals, on
closer examination, a complex process teeming with diverse political, legal,
and cultural investments and controversies.

This volume asks why mass digitization has become such a “matter of concern,”2
and explores its implications for the politics of cultural memory. In
practical terms, mass digitization is digitization on an industrial scale. But
in cultural terms, mass digitization is much more than this. It is the promise
of heightened access to—and better preservation of—the past, and of more
original scholarship and better funding opportunities. It also promises
entirely new ways of reading, viewing, and structuring archives, new forms of
value and their extraction, and new infrastructures of control. This volume
argues that the shape-shifting quality of mass digitization, and its social
dynamics, alters the politics of cultural memory institutions. Two movements
simultaneously drive mass digitization programs: the relatively new phenomenon
of big data gold rushes, and the historically more familiar archival
accumulative imperative. Yet despite these prospects, mass digitization
projects are also uphill battles. They are costly and speculative processes,
with no guaranteed rate of return, and they are constantly faced by numerous
limitations and contestations on legal, social, and cultural levels.
Nevertheless, both public and private institutions adamantly emphasize the
need to digitize on a massive scale, motivating initiatives around the
globe—from China to Russia, Africa to Europe, South America to North America.
Some of these initiatives are bottom-up projects driven by highly motivated
individuals, while others are top-down and governed by complex bureaucratic
apparatuses. Some are backed by private money, others publicly funded. Some
exist as actual archives, while others figure only as projections in policy
papers. As the ideal of mass digitization filters into different global
empirical situations, the concept of mass digitization attains nuanced
political hues. While all projects formally seek to serve the public interest,
they are in fact infused with much more diverse, and often conflicting,
political and commercial motives and dynamics. The same mass digitization
project can even be imbued with different and/or contradictory investments,
and can change purpose and function over time, sometimes rapidly.

Mass digitization projects are, then, highly political. But they are not
political in the sense that they transfer the politics of analog cultural
memory institutions into the digital sphere 1:1, or even liberate cultural
memory artifacts from the cultural politics of analog cultural memory
institutions. Rather, mass digitization presents a new political cultural
memory paradigm, one in which we see strands of technical and ideological
continuities combine with new ideals and opportunities; a political cultural
memory paradigm that is arguably even more complex—or at least appears more
messy to us now—than that of analog institutions, whose politics we have had
time to get used to. In order to grasp the political stakes of mass
digitization, therefore, we need to approach mass digitization projects not as
a continuation of the existing politics of cultural memory, or as purely
technical endeavors, but rather as emerging sociopolitical and sociotechnical
phenomena that introduce new forms of cultural memory politics.

## Framing, Mapping, and Diagnosing Mass Digitization

Interrogating the phenomenon of mass digitization, this book asks the question
of how mass digitization affects the politics of cultural memory institutions.
As a matter of practice, something is clearly changing in the conversion of
bounded—and scarce—historical material into ubiquitous ephemeral data. In
addition to the technical aspects of digitization, mass digitization is also
changing the political territory of cultural memory objects. Global commercial
platforms are increasingly administering and operating their scanning
activities in favor of the digital content they reap from the national “data
tombs” of museums and libraries and the feedback loops these generate. This
integration of commercial platforms into the otherwise primarily public
institutional set-up of cultural memory has produced a reconfiguration of the
political landscape of cultural memory from the traditional symbolic politics
of scarcity, sovereignty, and cultural capital to the late-sovereign
infrapolitics of standardization and subversion.

The empirical outlook of the present book is predominantly Western. Yet, the
overarching dynamics that have been pursued are far from limited to any one
region or continent, nor limited solely to the field of cultural memory.
Digitization is a global phenomenon and its reliance on late-sovereign
politics and subpolitical governance forms are shared across the globe.

The central argument of this book is that mass digitization heralds a new kind
of politics in the regime of cultural memory. Mass digitization of cultural
memory is neither a neutral technical process nor a transposition of the
politics of analog cultural heritage to the digital realm on a 1:1 scale. The
limitations of using conventional cultural-political frameworks for
understanding mass digitization projects become clear when working through the
concepts and regimes of mass digitization. Mass digitization brings together
so many disparate interests and elements that any mono-theoretical lens would
fail to account for the numerous political issues arising within the framework
of mass digitization. Rather, mass digitization should be approached as an
_infrapolitical_ process that brings together a multiplicity of interests
hitherto foreign to the realm of cultural memory.

The first part of the book, “framing,” outlines the theoretical arguments in
the book—that the political dynamics of mass digitization organize themselves
around the development of the technical infrastructures of mass digitization
in late-sovereign frameworks. Fusing infrastructure theory and theories on the
political dynamics of late sovereignty allows us to understand mass
digitization projects as cultural phenomena that are highly dependent on
standardization and globalization processes, while also recognizing that their
resultant infrapolitics can operate as forms of both control and subversion.

The second part of the book, “mapping,” offers an analysis of three different
mass digitization phenomena and how they relate to the late-sovereign politics
that gave rise to them. The part thus examines the historical foundation,
technical infrastructures, and (il)licit status and ideological underpinnings
of three variations of mass digitization projects: primarily corporate,
primarily public, and primarily private. While these variations may come
across as reproductions of more conventional societal structures, the chapters
in part two nevertheless also present us with a paradox: while the different
mass digitization projects that appear in this book—from Google’s privatized
endeavor to Europeana’s supranational politics to the unofficial initiatives
of shadow libraries—have different historical and cultural-political
trajectories and conventional regimes of governance, they also undermine these
conventional categories as they morph and merge into new infrastructures and
produce a new form of infrapolitics. The case studies featured in this book
are not to be taken as exhaustive examples, but rather as distinct, yet
nevertheless entangled, examples of how analog cultural memory is taken online
on a digital scale. They have been chosen with the aim of showing the
diversity of mass digitization, but also how it, as a phenomenon, ultimately
places the user in the dilemma of digital capitalism with its ethos of access,
speed, and participation (in varying degrees). The choices also have their
limitations, however. In their Western bias, which is partly rooted in this
author’s lack of language skills (specifically in Russian and Chinese), for
instance, they fail to capture the breadth and particularities of the
infrapolitics of mass digitization in other parts of the world. Much more
research is needed in this area.

The final part of the book, “diagnosing,” zooms in on the pathologies of mass
digitization in relation to affective questions of desire and uncertainty.
This part argues that instead of approaching mass digitization projects as
rationalized and instrumental projects, we should rather acknowledge them as
ambivalent spatio-temporal projects of desire and uncertainty. Indeed, as the
third part concludes, it is exactly uncertainty and desire that organize the
new spatio-temporal infrastructures of cultural memory institutions, where
notions such as serendipity and the infrapolitics of platforms have taken
precedence over accuracy and sovereign institutional politics. The third part
thus calls into question arguments that imagine mass digitization as
instrumentalized projects that either undermine or produce values of
serendipity, as well as overarching narratives of how mass digitization
produces uncomplicated forms of individualized empowerment and freedom.
Instead, the chapter draws attention to the new cultural logics of platforms
that affect the cultural politics of mass digitization projects.

Crucially, then, this book seeks neither to condemn nor celebrate mass
digitization, but rather to unpack the phenomenon and anchor it in its
contemporary political reality. It offers a story of the ways in which mass
digitization produces new cultural memory institutions online that may be
entwined in the cultural politics of their analog origins, but also raises new
political questions to the collections.

## Setting the Stage: Assembling the Motley Crew of Mass Digitization

The dream and practice of mass digitizing cultural works has been around for
decades and, as this section attests, the projects vary significantly in
shape, size, and form. While rudimentary and nonexhaustive, this section
gathers a motley collection of mass digitization initiatives, from some of the
earliest digitization programs to later initiatives. The goal of this section
is thus not so much to meticulously map mass digitization programs, but rather
to provide examples of projects that might illuminate the purpose of this book
and its efforts to highlight the infrastructural politics of mass
digitization. As the section attests, mass digitization is anything but a
streamlined process. Rather, it is a painstakingly complex process mired in
legal, technical, personal, and political challenges and problems, and it is a
vision whose grand rhetoric often works to conceal its messy reality.

It is pertinent to note that mass digitization suffers from the combined
gendered and racialized reality of cultural institutions, tech corporations,
and infrastructural projects: save a few exceptions, there is precious little
diversity in the official map of mass digitization, even in those projects
that emerge bottom-up. This does not mean that women and minorities have not
formed a crucial part of mass digitization, selecting cultural objects,
prepping them (for instance ironing newspapers to ensure that they are flat),
scanning them, and constructing their digital infrastructures. However, more
often than not, their contributions fade into the background as tenders of the
infrastructures of mass digitization rather than as the (predominantly white,
male) “face” of mass digitization. As such, an important dimension of the
politics of these infrastructural projects is their reproduction of
established gendered and racialized infrastructures already present in both
cultural institutions and the tech industry.3 This book hints at these crucial
dimensions of mass digitization, but much more work is needed to change the
familiar cast of cultural memory institutions, both in the analog and digital
realms.

With these introductory remarks in place, let us now turn to the long and
winding road to mass digitization as we know it today. Locating the exact
origins of this road is a subjective task that often ends up trapping the
explorer in the mirror halls of technology. But it is worth noting that of
course there existed, before the Internet, numerous attempts at capturing and
remediating books in scalable forms, for the purposes both of preservation and
of extending the reach of library collections. One of the most revolutionary
of such technologies before the digital computer or the Internet was
microfilm, which was first held forth as a promising technology of
preservation and remediation in the middle of the 1800s.4 At the beginning of
the twentieth century, the Belgian author, entrepreneur, visionary, lawyer,
peace activist, and one of the founders of information science, Paul Otlet,
brought the possibilities of microfilm to bear directly on the world of
libraries. Otlet authored two influential think pieces that outlined the
benefits of microfilm as a stable and long-term remediation format that could,
ultimately, also be used to extend the reach of literature, just as he and his
collaborator, inventor and engineer Robert Goldschmidt, co-authored a work on
the new form of the book through microphotography, _Sur une forme nouvelle du
livre: le livre microphotographique_. 5 In his analyses, Otlet suggested that
the most important transformations would not take place in the book itself,
but in substitutes for it. Some years later, beginning in 1927 with the
Library of Congress microfilming more than three million pages of books and
manuscripts in the British Library, the remediation of cultural works in
microformat became a widespread practice across the world, and microfilm is
still in use to this day.6 Otlet did not confine himself to thinking only
about microphotography, however, but also pursued a more speculative vein,
inspired by contemporary experiments with electromagnetic waves, arguing that
the most radical change of the book would be wireless technology. Moreover, he
also envisioned and partly realized a physical space, _Mundaneum_ , for his
dreams of a universal archive. Paul Otlet and Nobel Peace Prize Winner Henri
La Fontaine conceived of Mundaneum in 1895 as part of their work on
documentation science. Otlet called the Mundaneum “… an Idea, an Institution,
a Method, a Body of work materials and collections, a Building, a Network.” In
more concrete, but no less ambitious terms, the Mundaneum was to gather
together all the world’s knowledge and classify it according to a universal
system they developed called the “Universal Decimal Classification.” In 1910,
Otlet and La Fontaine found a place for their work in the Palais du
Cinquantenaire, a government building in Brussels. Later, Otlet commissioned
Le Corbusier to design a building for the Mundaneum in Geneva. The cooperation
ended unsuccessfully, however, and the Mundaneum later led a nomadic life, moving
from The Hague to Brussels and then, in 1993, to the city of Mons in Belgium, where it
now exists as a museum called the Mundaneum Archive Center. Fatefully, Mons, a
former mining district, also houses Google’s largest data center in Europe and
it did not take Google long to recognize the cultural value in entering a
partnership with the Mundaneum, the two parties signing a contract in 2013.
The contract entailed among other things that Google would sponsor a traveling
exhibit on the Mundaneum, as well as a series of talks on Internet issues at
the museum and the university, and that the Mundaneum would use Google’s
social networking service, Google Plus, as a promotional tool. An article in
the _New York Times_ described the partnership as “part of a broader campaign
by Google to demonstrate that it is a friend of European culture, at a time
when its services are being investigated by regulators on a variety of
fronts.” 7 The collaboration not only spurred international interest, but also
inspired a group of influential tech activists and artists closely associated
with the creative work of shadow libraries to create the critical archival
project Mondotheque.be, a platform for “discussing and exploring the way
knowledge is managed and distributed today in a way that allows us to invent
other futures and different narrations of the past,”8 and a resulting digital
publication project, _The Radiated Book,_ authored by an assembly of
activists, artists, and scholars such as Femke Snelting, Tomislav Medak,
Dusan Barok, Geraldine Juarez, Shin Joung Yeo, and Matthew Fuller. 9

Another early precursor of mass digitization emerged with Project Gutenberg,
often referred to as the world’s oldest digital library. Project Gutenberg was
the brainchild of author Michael S. Hart, who in 1971, using technologies such
as ARPANET, Bulletin Board Systems (BBS), and Gopher protocols, experimented
with publishing and distributing books in digital form. As Hart reminisced in
his later text, “The History and Philosophy of Project Gutenberg,”10 Project
Gutenberg emerged out of a donation he received as an undergraduate in 1971,
which consisted of $100 million worth of computing time on the Xerox Sigma V
mainframe at the University of Illinois at Urbana-Champaign. Wanting to make
good use of the donation, Hart, in his own words, “announced that the greatest
value created by computers would not be computing, but would be the storage,
retrieval, and searching of what was stored in our libraries.”11 He therefore
committed himself to converting analog cultural works into digital text in a
format not only available to, but also accessible/readable to, almost all
computer systems: “Plain Vanilla ASCII” (ASCII for “American Standard Code for
Information Interchange”). While Project Gutenberg only converted about 50
works into digital text in the 1970s and the 1980s (the first was the
Declaration of Independence), it today hosts up to 56,000 texts in its
distinctly lo-fi manner.12 Interestingly, Michael S. Hart noted very early on
that the intention of the project was never to reproduce authoritative
editions of works for readers—“who cares whether a certain phrase in
Shakespeare has a ‘:’ or a ‘;’ between its clauses”—but rather to “release
etexts that are 99.9% accurate in the eyes of the general reader.”13 As the
present book attests, this early statement captures one of the central points
of contestation in mass digitization: the trade-off between accuracy and
accessibility, raising questions both of the limits of commercialized
accelerated digitization processes (see chapter 2 on Google Books) and of
class-based and postcolonial implications (see chapter 4 on shadow libraries).

If Project Gutenberg spearheaded the efforts of bringing cultural works into
the digital sphere through manual conversion of analog text into lo-fi digital
text, a French mass digitization project affiliated with the construction of
the Bibliothèque nationale de France (BnF), initiated in 1989, could be
considered one of the earliest examples of actually digitizing cultural works
on an industrial scale.14 The French were thus working on blueprints of mass
digitization programs before mass digitization became a widespread practice,
as part of the construction of a new national library under the guidance of
Alain Giffard, initiated by François Mitterrand. In a letter sent in 1990 to
Prime Minister Michel Rocard, President Mitterrand outlined his vision of a
digital library, noting that “the novelty will be in the possibility of using
the most modern computer techniques for access to catalogs and documents of
the Bibliothèque nationale de France.”15 The project managed to digitize a
body of 70,000–80,000 titles, a sizeable amount of works for its time. As
Alain Giffard noted in hindsight, “the main difficulty for a digitization
program is to choose the books, and to choose the people to choose the
books.”16 Explaining in a conversation with me how he went about this task,
Giffard emphasized that he chose “not librarians but critics, researchers,
etc.” This choice, he underlined, could be made only because the digitization
program was “the last project of the president and a special mission” and thus
not formally a civil service program.17 The work process was thus as follows:

> I asked them to prepare a list. I told them, “Don’t think about what exists.
I ask of you a list of books that would be logical in this concept of a
library of France.” I had the first list and we showed it to the national
library, which was always fighting internally. So I told them, “I want this
book to be digitized.” But they would never give it to us because of
territory. Their ship was not my ship. So I said to them, “If you don’t give
me the books I shall buy the books.” They said I could never buy them, but
then I started buying the books from antiques suppliers because I earned a lot
of money at that time. So in the end I had a lot of books. And I said to them,
“If you want the books digitized you must give me the books.” But of the
80,000 books that were digitized, half were not in the collection. I used the
staff’s garages for the books, 80,000 books. It is an incredible story.18

Incredible indeed. And a wonderful anecdote that makes clear that mass
digitization, rather than being just a technical challenge, is also a
politically contingent process that raises fundamental questions of territory
(institutional as well as national), materiality, and culture. The integration
of the digital _très grande bibliothèque_ into the French national mass
digitization project Gallica, later in 1997, also foregrounds the
infrastructural trajectory of early national digitization programs into later
glocal initiatives. 19

The question of pan-national digitization programs was precisely at the
forefront of another early prominent mass digitization project, namely the
Universal Digital Library (UDL), which was launched in 1995 by Carnegie Mellon
computer scientist Raj Reddy and developed by linguist Jaime Carbonell,
physicist Michael Shamos, and Carnegie Mellon Foundation dean of libraries
Gloriana St. Clair. In 1998, the project launched the Thousand Book Project.
Later, the UDL scaled its initial efforts up to the Million Book Project,
which they successfully completed in 2007.20 Organizationally, the UDL stood
out from many of the other digitization projects by including initial
participation from three non-Western entities in addition to the Carnegie
Mellon Foundation—the governments of India, China, and Egypt.21 Indeed, India
and China invested about $10 million in the initial phase, employing several
hundred people to find books, bring them in, and take them back. While the
project ambitiously aimed to provide access “to all human knowledge, anytime,
anywhere,” it ended its scanning activities in 2008. As such, the Universal
Digital Library points to another central infrastructural dimension of mass
digitization: its highly contingent spatio-temporal configurations that are
often posed in direct contradistinction to the universalizing discourse of
mass digitization. Across the board, mass digitization projects, while
confining themselves in practice to a limited target of how many books they
will digitize, employ a discourse of universality, perhaps alluding vaguely to
how long such an endeavor will take but in highly uncertain terms (see
chapters 3 and 5 in particular).

No exception from the universalizing discourse, another highly significant
mass digitization project, the Internet Archive, emerged around the same time
as the Universal Digital Library. The Internet Archive was founded by open
access activist and computer engineer Brewster Kahle in 1996, and although it
was primarily oriented toward preserving born-digital material, in particular
the Internet ( _Wired_ calls Brewster Kahle “the Internet’s de facto
librarian” 22), the Archive also began digitizing books in 2005, supported by
a grant from the Alfred Sloan Foundation. Later that year, the Internet
Archive created the infrastructural initiative, Open Content Alliance (OCA),
and was now embedded in an infrastructure that included over 30 major US
libraries, as well as major search engines (by Yahoo! and Microsoft),
technology companies (Adobe and Xerox), a commercial publisher (O’Reilly
Media, Inc.), and a not-for-profit membership organization of more than 150
institutions, including universities, research libraries, archives, museums,
and historical societies.23 The Internet Archive’s mass digitization
infrastructure was thus from the beginning a mesh of public and private
cooperation, where libraries made their collections available to the Alliance
for scanning, and corporate sponsors or the Internet Archive conversely funded
the digitization processes. As such, the infrastructures of the Internet
Archive and Google Books were rather similar in their set-ups.24 Nevertheless,
the initiative of the Internet Archive’s mass digitization project and its
attendant infrastructural alliance, OCA, should be read as both a technical
infrastructure responding to the question of _how_ to mass digitize in
technical terms, and as an infrapolitical reaction in response to the forces
of the commercial world that were beginning to gather around mass
digitization, such as Amazon 25 and Google. The Internet Archive thus
positioned itself as a transparent open source alternative to the closed doors
of corporate and commercial initiatives. Yet, as Kalev Leetaru notes, the case
was more complex than that. Indeed, while the OCA was often foregrounded as
more transparent than Google, their technical infrastructural components and
practices were in fact often just as shrouded in secrecy.26 As such, the
Internet Archive and the OCA draw attention to the important infrapolitical
question in mass digitization, namely how, why, and when to manage
visibilities in mass digitization projects.

Although the media sometimes picked up stories on mass digitization projects
already outlined, it wasn’t until Google entered the scene that mass
digitization became a headline-grabbing enterprise. In 2004, Google founders
Larry Page and Sergey Brin traveled to Frankfurt to make a rare appearance at
the Frankfurt Book Fair. Google was at that time still considered a “scrappy”
Internet company in some quarters, as compared with tech giants such as
Microsoft.27 Yet Page and Brin went to Frankfurt to deliver a monumental
announcement: Google would launch a ten-year plan to make available
approximately 15 million digitized books, both in- and out-of-copyright
works.28 They baptized the program “Google Print,” a project that consisted of
a series of partnerships between Google and five English-language libraries:
the University of Michigan at Ann Arbor, Stanford, Harvard, Oxford (Bodleian
Library), and the New York City Public Library. While Page’s and Brin’s
announcement was surprising to some, many had anticipated it; as already
noted, advances toward mass digitization proper had already been made, and
some of the partnership institutions had been negotiating with Google since
2002.29 As with many of the previous mass digitization projects, Google found
inspiration for their digitization project in the long-lived utopian ideal of
the universal library, and in particular the mythic library of Alexandria.30
As with other Google endeavors, it seemed that Page was intent on realizing a
utopian ideal that scholars (and others) had long dreamed of: a library
containing everything ever written. It would be realized, however, not with
traditional human-centered means drawn from the world of libraries, but rather
with an AI approach. Google Books would exceed human constraints, taking the
seemingly impossible vision of digitizing all the books in the world as a
starting point for constructing an omniscient Artificial Intelligence that
would know the entire human symbol system and allow flexible and intuitive
recollection. These constraints were physical (how to digitize and organize
all this knowledge in physical form); legal (how to do it in a way that
suspends existing regulation); and political (how to transgress territorial
systems). The invocation of the notion of the universal library was not a
neutral action. Rather, the image of Google Books as a library worked as a
symbolic form in a cultural scheme that situated Google as a utopian, and even
ethical, idealist project. Google Books seemingly existed by virtue of
Goethe’s famous maxim that “To live in the ideal world is to treat the
impossible as if it were possible.”31 At the time, the industry magazine
_Bookseller_ wrote in response to Google’s digitization plans: “The prospect
is both thrilling and frightening for the book industry, raising a host of
technical and theoretical issues.” 32 And indeed, while some reacted with
enthusiasm and relief to the prospect of an organization being willing to
suffer the cost of mass digitization, others expressed economic and ethical
concerns. The Authors Guild, a New York–based association, promptly filed a
copyright infringement suit against Google. And librarians were forced to
revisit core ethical principles such as privacy and public access.

The controversies of Google Books initially played out only in US territory.
However, another set of concerns of a more territorial and political nature
soon came to light. The French President at the time, Jacques Chirac, called
France to cultural-political arms, urging his culture minister, Renaud
Donnedieu de Vabres, and Jean-Noël Jeanneney, then-head of France’s
Bibliothèque nationale, to do the same with French texts as Google planned to
do with their partner libraries, but by means of a French search engine.33
Jeanneney initially framed this French cultural-political endeavor as a
European “contre-attaque” against Google Books, which, according to Jeanneney,
could pose “une domination écrasante de l'Amérique dans la définition de
l'idée que les prochaines générations se feront du monde.” (“a crushing
American domination of the formation of future generations’ ideas about the
world”)34 Other French officials insisted that the French digitization project
should be seen not primarily as a cultural-political reaction _against_
Google, but rather as a cultural-political incentive within France and Europe
to make European information available online. “I really stress that it's not
anti-American,” an official at France’s Ministry of Culture and Communication,
speaking on the condition of anonymity, noted in an interview. “It is not a
reaction. The objective is to make more material relevant to European heritage
available. … Everybody is working on digitization projects.” Furthermore, the
official did not rule out potential cooperation between Google and the
European project. 35 There was no doubt, however, that the move to mass
digitization “was a political drive by the French,” as Stephen Bury, head of
European and American collections at the British Library, emphasized.36

Despite its mixed messages, the French reaction nevertheless underscored the
controversial nature of mass digitization as a symbolic, as well as technical,
aspiration: mass digitization was a process that not only neutrally scanned
and represented books but could also produce a new mode of world-making,
actively structuring archives as well as their users.37 Now questions began to
surface about where, or with whom, to place governance over this new archive:
who would be the custodian of the keys to this new library? And who would be
the librarians? A series of related questions could also be asked: who would
determine the archival limits, the relations between the secret and the non-
secret or the private and the public, and whether these might involve property
or access rights, publication or reproduction rights, classification, and
putting into order? France soon managed to rally other EU countries (Spain,
Poland, Hungary, Italy, and Germany) to back its recommendation to the
European Commission (EC) to construct a European alternative to Google’s
search engine and archive and to set this out in writing. Occasioned by the
French recommendation, the EC promptly adopted the idea of Europeana—the name
of the proposed alternative—as a “flagship project” for the budding EU
cultural policy.38 Soon after, in 2008, the EC launched Europeana, giving
access to some 4.5 million digital objects from more than 1,000 institutions.

Europeana’s Europeanizing discourse presents a territorializing approach to
mass digitization that stands in contrast to the more universalizing tone of
Mundaneum, Gutenberg, Google Books, and the Universal Digital Library. As
such, it ties in with our final examples, namely the sovereign mass
digitization projects that have in fact always been one of the primary drivers
in mass digitization efforts. To this day, the map of mass digitization is
populated with sovereign mass digitization efforts from Holland and Norway to
France and the United States. One of the most impressive projects is the
Norwegian mass digitization project at the National Library of Norway, which
since 2004 has worked systematically to develop a digital National Library
that encompasses text, audio, video, image, and websites. Impressively, the
National Library of Norway offers digital library services that provide online
access (to all with a Norwegian IP address) to full-text versions of all books
published in Norway up until the year 2001, access to digital newspaper
collections from the major national and regional newspapers in all libraries
in the country, and opportunities for everyone with Internet access to search
and listen to more than 40,000 radio programs recorded between 1933 and the
present day.39 Another ambitious national mass digitization project is the
Dutch National Library’s effort to digitize all printed publications since
1470 and to create a National Platform for Digital Publications, which is to
act both as a content delivery platform for its mass digitization output and
as a national aggregator for publications. To this end, the Dutch National
Library made deals with Google Books and ProQuest to digitize 42 million pages
just as it entered into partnerships with cross-domain aggregators such as
Europeana.40 Finally, it is imperative to mention the Digital Public Library
of America (DPLA), a national digital library conceived of in 2010 and
launched in 2013, which aggregates digital collections of metadata from around
the United States, pulling in content from large institutions like the
National Archives and Records Administration and HathiTrust, as well as from
smaller archives. The DPLA is in great part the fruit of the intellectual work
of Harvard University’s Berkman Center for Internet and Society and the work
of its Steering Committee, which consisted of influential names from the
digital, legal, and library worlds, such as Robert Darnton, Maura Marx, and
John Palfrey from Harvard University; Paul Courant of the University of
Michigan; Carla Hayden, then of Baltimore’s Enoch Pratt Free Library and
subsequently the Librarian of Congress; Brewster Kahle; Jerome McGann; Amy
Ryan of the Boston Public Library; and Doron Weber of the Sloan Foundation.
Key figures in the DPLA have often, to great rhetorical effect, positioned the DPLA
vis-à-vis Google Books, partly as a question of public versus private
infrastructures.41 Yet, as the then-Chairman of DPLA John Palfrey conceded,
the question of what constitutes “public” in a mass digitization context
remains a critical issue: “The Digital Public Library of America has its
critics. One counterargument is that investments in digital infrastructures at
scale will undermine support for the traditional and the local. As the
chairman of the DPLA, I hear this critique in the question-and-answer period
of nearly every presentation I give. … The concern is that support for the
DPLA will undercut already eroding support for small, local public
libraries.”42 While Palfrey offers good arguments for why the DPLA could
easily work in unison with, rather than jeopardize, smaller public libraries,
and while the DPLA is building infrastructures to support this claim,43 the
discussion nevertheless highlights the difficulties with determining when
something is “public,” and even national.

While the highly publicized and institutionalized projects I have just
recounted have taken center stage in the early and later years of mass
digitization, they neither constitute the full cast, nor the whole machinery,
of mass digitization assemblages. Indeed, as chapter 4 in this book charts, at
the margins of mass digitization another set of actors has been at work
building new digital cultural memory assemblages, including projects such as
Monoskop and Lib.ru. These actors, referred to in this book as shadow library
projects (see chapter 4), at once both challenge and confirm the broader
infrapolitical dimensions of mass digitization, including its logics of
digital capitalism, network power, and territorial reconfigurations of
cultural memory between universalizing and glocalizing discourses. Within this
new “ecosystem of access,” unauthorized archives such as Libgen, Gigapedia, and
Sci-Hub have successfully built “shadow libraries” with global reach,
containing massive aggregations of downloadable text material of both
scholarly and fictional character.44 As chapter 4 shows, these initiatives
further challenge our notions of public good, licit and illicit mass
digitization, and the territorial borders of mass digitization, just as they
add another layer of complexity to the question of the politics of mass
digitization.

Today, then, the landscape of mass digitization has evolved considerably, and
we can now begin to make out the political contours that have shaped, and
continue to shape, the emergent contemporary knowledge infrastructures of mass
digitization, ripe as they are with contestation, cooperation, and
competition. From this perspective, mass digitization appears as a preeminent
example of how knowledge politics are configured in today’s world of
“assemblages” as “multisited, transboundary networks” that connect
subnational, national, supranational, and global infrastructures and actors,
without, however, necessarily doing so through formal interstate systems.45 We
can also see that mass digitization projects did not arise as a result of a
sovereign decision, but rather emerged through a series of contingencies
shaped by late-capitalist and late-sovereign forces. Furthermore, mass
digitization presents us with an entirely new cultural memory paradigm—a
paradigm that requires a shift in thinking about cultural works, collections,
and contexts, from cultural records to be preserved and read by humans, to
ephemeral machine-readable entities. This change requires a shift in thinking
about the economy of cultural works, collections, and contexts, from scarce
institutional objects to ubiquitous flexible information. Finally, it requires
a shift from thinking about these same issues as belonging to national-global
domains to conceiving of them as a set of political processes that may
well be placed in national settings, but are oriented toward global agendas
and systems.

## Interrogating Mass Digitization

Mass digitization is often elastic in definition and elusive in practice.
Concrete attempts have been made to delimit what mass digitization is, but
these rarely go into specifics. The two characteristics most commonly
associated with mass digitization are the relative lack of selectivity of
materials, as compared to smaller-scale digitization projects, and the high
speed and high volume of the process in terms of both digital conversion and
metadata creation, which are made possible through a high level of
automation.46 Mass digitization is thus concerned not only with preservation,
but also with what kind of knowledge practices and values technology allows
for and encourages, for example, in relation to de- and recontextualization,
automation, and scale.47

Studies of mass digitization are commonly oriented toward technology or
information policy issues close to libraries, such as copyright, the quality
of digital imagery, long-term preservation responsibility, standards and
interoperability, and economic models for libraries, publishers, and
booksellers, rather than, as here, the exploration of theory.48 This is not to
say that existing work on mass digitization is not informed by theoretical
considerations, but rather that the majority of research emphasizes policy and
technical implementation at the expense of a more fundamental understanding of
the cultural implications of mass digitization. In part, the reason for this
is the relative novelty of mass digitization as an identifiable field of
practice and policy, and its significant ramifications in the fields of law
and information science.49 In addition to scholarly elucidations, mass
digitization has also given rise to more ideologically fuelled critical books
and articles on the topic.50

Despite its disciplinary branching, work on mass digitization has mainly taken
place in the fields of information science, law, and computer science, and has
primarily problematized the “hows” of mass digitization and not the “whys.”51
As with technical work on mass digitization, most nontechnical studies of mass
digitization are “problem-solving” rather than “critical,” and this applies in
particular to work originating from within the policy analysis community. This
body of work seeks to solve problems within the existing social order—for example,
copyright or metadata—rather than to interrogate the assumptions that underlie
mass digitization programs, which would include asking what kinds of knowledge
production mass digitization gives rise to. How does mass digitization change
the ideological infrastructures of cultural heritage institutions? And from
what political context does the urge to digitize on an industrial scale
emerge? While the technical and problem-solving corpus on mass digitization is
highly valuable in terms of outlining the most important stakeholders and
technical issues of the field, it does not provide insight into the deeper
structures, social mechanisms, and political implications of mass
digitization. Moreover, it often fails to account for digitization as a force
that is deeply entwined with other dynamics that shape its development and
uses. It is this lack that the present volume seeks to remedy.

## Assembling Mass Digitization

Mass digitization is a composite and fluctuating infrastructure of
disciplines, interests, and forces rooted in public-private assemblages,
driven by ideas of value extraction and distribution, and supported by new
forms of social organization. Google Books, for instance, is both a commercial
project covered by nondisclosure agreements _and_ an academic scholarly
project open for all to see. Similarly, Europeana is both a public
digitization project directed at “citizens” _and_ a public-private partnership
enterprise ripe with profit motives. Nevertheless, while it is tempting to
speak about specific mass digitization projects such as Google Books and
Europeana in monolithic and contrastive terms, mass digitization projects are
anything but tightly organized, institutionally delineated, coherent wholes
that produce one dominant reading. We do not find one “essence” in mass
digitized archives. They are not “enlightenment projects,” “library services,”
“software applications,” “interfaces,” or “corporations.” Nor are they rooted
in one central location or single ideology. Rather, mass digitization is a
complex material and social infrastructure performed by a diverse
constellation of cultural memory professionals, computer scientists,
information specialists, policy personnel, politicians, scanners, and
scholars. Hence, this volume approaches mass digitization projects as
“assemblages,” that is, as contingent arrangements consisting of humans,
machines, objects, subjects, spaces and places, habits, norms, laws, politics,
and so on. These arrangements cross national-global and public-private lines,
producing what this volume calls “late-sovereign,” “posthuman,” and “late-
capitalist” assemblages.

To give an example, we can look at how the national and global aspects of
cultural memory institutions change with mass digitization. The national
museums and libraries we frequent today were largely erected during eras of
high nationalism, as supreme acts of cultural and national territoriality.
“The early establishment of a national collection,” as Belinda Tiffen notes,
“was an important step in the birth of the new nation,” since it signified
“the legitimacy of the nation as a political and cultural entity with its own
heritage and culture worthy of being recorded and preserved.”52 Today, as the
initial French incentive to build Europeana shows, we find similar
nationalization processes in mass digitization projects. However,
nationalizing a digital collection often remains more a performative gesture than a
practical feat, partly because the information environment in the digital
sphere differs significantly from that of the analog world in terms of
territory and materiality, and partly because the dichotomy between national
and global, an agreed-upon construction for centuries, is becoming more and
more difficult to uphold in theory and practice.53 Thus, both Google Books and
Europeana link to sovereign frameworks such as citizens and national
representation, while also undermining them with late-capitalist transnational
economic agreements.

A related example is the posthuman aspect of cultural memory politics.
Cultural memory artifacts have always been thought of as profoundly human
collections, in the sense that they were created by and for human minds and
human meaning-making. Previously, humans also organized collections. But with
the invention of computers, most cultural memory institutions also introduced
a machine element to the management of accelerating amounts of information,
such as computerized catalog systems and recollection systems. With the advent
of mass digitization, machines have gained a whole new role in the cultural
memory ecosystem, not only as managers, but also as interpreters. Thus,
collections are increasingly digitized to be read by machines instead of
humans, just as metadata is now becoming a question of machine analysis rather
than of human contextualization. Machines are taking on more and more tasks in
the realm of cultural memory that require a substantial amount of cognitive
insight (just as mass digitization has created the need for new robot-like,
and often poorly paid, human tasks, such as the monotonous work of book
scanning). Mass digitization has thereby given rise to an entirely new
cultural-legal category titled “non-consumptive research,” a term used to
describe the large-scale analysis of texts, and which has been formalized by
the Google Books Settlement, for instance, in the following way: “research in
which computational analysis is performed on one or more books, but not
research in which a researcher reads or displays.”54
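
To give a concrete, if simplified, sense of what falls under this category, the following is a minimal hypothetical sketch in Python of a “non-consumptive” analysis: aggregate word statistics are computed across a folder of digitized texts, and only derived numbers ever leave the corpus, while no passage is displayed or reproduced. The folder name and function are invented for illustration and say nothing about how Google Books or HathiTrust actually implement their research corpora.

```python
from collections import Counter
from pathlib import Path


def term_frequencies(corpus_dir: str) -> Counter:
    """Aggregate word counts across a folder of plain-text, OCR'd books.

    Only derived statistics leave this function; no passage of any book
    is displayed or reproduced, which is the sense in which the analysis
    is "non-consumptive".
    """
    counts: Counter = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        counts.update(word.lower() for word in text.split() if word.isalpha())
    return counts


if __name__ == "__main__":
    # "digitized_books" is a placeholder for a hypothetical local corpus.
    print(term_frequencies("digitized_books").most_common(10))
```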

Lastly, mass digitization connects the politics of cultural memory to
transnational late capitalism, and to one of its expressions in particular:
digital capitalism.55 Of course, cultural memory collections have a long
history with capitalism. The nineteenth century saw very fuzzy boundaries
between the cultural functions of libraries and the commercial interests that
surrounded them, and, as historian of libraries Francis Miksa notes, Melvil
Dewey, inventor of the Dewey Decimal System, was a great admirer of the
corporate ideal, and was eager to apply it to the library system.56 Indeed,
library development in the United States was greatly advanced by the
philanthropy of capitalism, most notably by Andrew Carnegie.57 The question,
then, is not so much whether mass digitization has brought cultural memory
institutions, and their collections and users, into a capitalist system, but
_what kind_ of capitalist system mass digitization has introduced cultural
memory to: digital capitalism.

Today, elements of the politics of cultural memory are being reassembled into
novel knowledge configurations. As a consequence, their connections and
conjugations are being transformed, as are their institutional embeddings.
Indeed, mass digitization assemblages are a product of our time. They are new
forms of knowledge institutions arising from a sociopolitical environment
where vertical territorial hierarchies and horizontal networks entwine in a
new political mesh: where solid things melt into air, and clouds materialize
as material infrastructures, where boundaries between experts and laypeople
disintegrate, and where machine cognition operates on a par with human
cognition on an increasingly large scale. These assemblages enable new types
of political actors—networked assemblages—which hold particular forms of power
despite their informality vis-à-vis the formal political system; and in turn,
through their practices, these actors partly build and shape those
assemblages.

Since concepts always respond to “a specific social and historical situation
of which an intellectual occasion is part,”58 it is instructive to revisit the
1980s, when the theoretical notion of assemblage emerged and slowly gained
cross-disciplinary purchase.59 Around this time, the stable structures of
modernist institutions began to give ground to postmodern forces: sovereign
systems entered into supra-, trans-, and international structures,
“globalization” became a buzzword, and privatizing initiatives drove wedges
into the foundations of state structures. The centralized power exercised by
disciplinary institutions was increasingly distributed along more and more
lines, weakening the walls of circumscribed centralized authority.60 This
disciplinary decomposition took place on all levels and across all fields of
society, including institutional cultural memory containers such as libraries
and museums. The forces of privatization, globalization, and digitization put
pressures not only on the authority of these institutions but also on a host
of related authoritative cultural memory elements, such as “librarians,”
“cultural works,” and “taxonomies,” and cultural memory practices such as
“curating,” “reading,” and “ownership.” Librarians were “disintermediated” by
technology, cultural works fragmented into flexible data, and curatorial
principles were revised and restructured just as reading was now beginning to
take place in front of screens, meaning-making to be performed by machines,
and ownership of works to be substituted by contractual renewals.

Thinking about mass digitization as an “assemblage” allows us to abandon the
image of a circumscribed entity in favor of approaching it as an aggregate of
many highly varied components and their contingent connections: scanners,
servers, reading devices, cables, algorithms; national, EU, and US
policymakers; corporate CEOs and employees; cultural heritage professionals
and laypeople; software developers, engineers, lobby organizations, and
unsalaried labor; legal settlements, academic conferences, position papers,
and so on. It gives us pause—every time we say “Google” or “Europeana,” we
might reflect on what we actually mean. Does the researcher employed by a
university library and working with Google Books also belong to Google Books?
Do the underpaid scanners? Do the users of Google? Or, when we refer to Google
Books, do we rather only mean to include the founders and CEOs of Google? Or
has Google in fact become a metaphor that expresses certain characteristics of
our time? The present volume suggests that all these components enter into the
new phenomenon of mass digitization and produce a new field of potentiality,
while at the same time they retain their original qualities and value systems,
at least to some extent. No assemblage is whole and imperturbable, nor
entirely reducible to its parts, but is simultaneously an accumulation of
smaller assemblages and a member of larger ones.61 Thus Google Books, for
example, is both an aggregation of smaller assemblages such as university
libraries, scanners (both humans and machines), and books, _and_ a member of
larger assemblages such as Google, Silicon Valley, neoliberal lobbies, and the
Internet, to name but a few.

While representations of assemblages such as the analyses performed in this
volume are always doomed to misrepresent empirical reality on some level, this
approach nevertheless provides a tool for grasping at least some of mass
digitization’s internal heterogeneity, and the mechanisms and processes that
enable each project’s continued assembled existence. The concept of the
assemblage allows us to grasp mass digitization as composed of ephemeral
projects that are uncertain by nature, and sometimes even made up of
contradictory components.62 It also allows us to recognize that they are more
than mere networks: while ephemeral and networked, something enables them to
cohere. Bruno Latour writes, “Groups are not silent things, but rather the
provisional product of a constant uproar made by the millions of contradictory
voices about what is a group and who pertains to what.”63 It is the “taming
and constraining of this multivocality,” in particular by communities of
knowledge and everyday practices, that enables something like mass
digitization to cohere as an assemblage.64 This book is, among other things,
about those communities and practices, and the politics they produce and are
produced by. In particular, it addresses the politics of mass digitization as
an infrapolitical activity that retreats into, and emanates from, digital
infrastructures and the network effects they produce.

## Politics in Mass Digitization: Infrastructure and Infrapolitics

If the concept of “assemblage” allows us to see the relational set-up of mass
digitization, it also allows us to inquire into its political infrastructures.
In political terms, assemblage thinking is partly driven by dissatisfaction
with state-centric dominant ontologies, including reified units such as state,
society, or capitalism, and the unilinear focus on state-centric politics over
other forms of politics.65 The assemblage perspective is therefore especially
useful for understanding the politics of late-sovereign and late-capitalist
data projects such as mass digitization. As we will see in part 2, the
epistemic frame of sovereignty continues to offer an organizing frame for the
constitution and regulation of mass digitization and the virtues associated
with it (such as national representation and citizen engagement). However, at
the same time, mass digitization projects are in direct correspondence with
neoliberal values such as privatization, consumerism, globalization, and
acceleration, and their technological features allow for a complete
restructuring of the disciplinary spaces of libraries to form vaster and even
global scales of integration and economic organization on a multinational
stage.

Mass digitization is a concrete example of what cultural memory projects look
like in a “late-sovereign” age, where globalization tests the political and
symbolic authority of sovereign cultural memory politics to its limits, while
sovereignty as an epistemic organizing principle for the politics of cultural
memory nonetheless persists.66 The politics of cultural memory, in particular
those practiced by cultural heritage institutions, often still cling to fixed
sovereign taxonomies and epistemic frameworks. This focus is partly determined
by their institutional anchoring in the framework of national cultural
policies. In mass digitization, however, the formal political apparatus of
cultural heritage institutions is adjoined by a politics that plays out in the
margins: in lobbies, software industries, universities, social media, etc.
Those evaluating mass digitization assemblages in macropolitical terms, that
is, those who are concerned with political categories, will glean little of
the real politics of mass digitization, since such politics at the margins
would escape this analytic matrix.67 Assemblage thinking, by contrast, allows
us to acknowledge the political mechanisms of mass digitization beyond
disciplinary regulatory models, in societies “where forces … not
categories, clash.”68

As Ian Hacking and many others have noted, the capacious usage of the notion
of “politics” threatens to strip the word of meaning.69 But talk of a politics
of mass digitization is no conceptual gimmick, since what is taking place in
the construction and practice of mass digitization assemblages plainly is
political. The question, then, is how best to describe the politics at work in
mass digitization assemblages. The answer advanced by the present volume is to
think of the politics of mass digitization as “infrapolitics.”

The notion of infrapolitics has until now primarily and profoundly been
advanced as a concept of hidden dissent or contestation (Scott, 1990).70 This
volume suggests shifting the lens to focus on a different kind of
infrapolitics, however, one that takes the shape not only of resistance but
also of maintenance and conformity, since the story of mass digitization is
both the story of contestation _and_ the politics of mundane and standard-
seeking practices.71 The infrapolitics of mass digitization is, then, a kind
of politics “premised not on a subject, but on the infra,” that is, the
“underlying rules of the world,” organized around glocal infrastructures.72
The infrapolitics of mass digitization is the building and living of
infrastructures, both as spaces of contestation and processes of
naturalization.

Geoffrey Bowker and Susan Leigh Star have argued that the establishment of
standards, categories, and infrastructures “should be recognized as the
significant site of political and ethical work that they are.”73 This applies
not least in the construction and development of knowledge infrastructures
such as mass digitization assemblages, structures that are upheld by
increasingly complex sets of protocols and standards. Attaching “politics” to
“infrastructure” endows the term—and hence mass digitization under this
rubric—with a distinct organizational form that connects various stages and
levels of politics, as well as a distinct temporality that relates mass
digitization to the forces and ideas of industrialization and globalization.

The notion of infrastructure has a surprisingly brief etymology. It first
entered the French language in 1875 in relation to the excavation of
railways.74 Over the following decades, it primarily designated fixed
installations designed to facilitate and foster mobility. It did not enter
English vocabulary until 1927, and as late as 1951, the word was still
described by English sources as “new” (OED).75 When NATO adopted the term in
the 1950s, it gained a military tinge. Since then, “infrastructure” has
proliferated into ever more contexts and disciplines, becoming a “plastic
word”76 often used to signify any vital and widely shared human-constructed
resource.77

What makes infrastructures central for understanding the politics of mass
digitization? Primarily, they are crucial to understanding how industrialism
has affected the ways in which we organize and engage with knowledge, but the
politics of infrastructures are also becoming increasingly significant in the
late-sovereign, late-capitalist landscape.

The infrastructures of mass digitization mediate, combine, connect, and
converge upon different institutions, social networks, and devices, augmenting
the actors that take part in them with new agential possibilities by expanding
the radius of their action, strengthening and prolonging the reach of their
performance, and, through their accelerating effects, setting them free for
other activities; the time thus gained is often reinvested in other
infrastructures, such as social media. The infrastructures of mass
digitization also increase the demand for globalization and mobility, since
they expand the radius of using/reading/working.

The infrastructures of mass digitization are thus media of polities and
politics, at times visible and at others barely legible or felt, and home both
to dissent as well as to standardizing measures. These include legal
infrastructures such as copyright, privacy, and trade law; material
infrastructures such as books, wires, scanners, screens, server parks, and
shelving systems; disciplinary infrastructures such as metadata, knowledge
organization, and standards; cultural infrastructures such as algorithms,
searching, reading, and downloading; societal infrastructures such as the
realms of the public and private, national and global. These infrastructures
are, variously, both the prerequisites for and the results of interactions
between the spatial, temporal, and social classes that take part in the
construction of mass digitization. The infrapolitics of mass digitization is
thus geared toward both interoperability and standardization, as well as
toward variation.78

Often when thinking of infrastructures, we conceive of them in terms of
durability and stability. Yet, while some infrastructures, such as railways
and Internet cables, are fairly solid and rigid constructions, others—such as
semantic links, time-limited contracts, and research projects—are more
contingent entities which operate not as “fully coherent, deliberately
engineered, end-to-end processes,” but rather as amorphous, contingent
assemblages, as “ecologies or complex adaptive systems” consisting of
“numerous systems, each with unique origins and goals, which are made to
interoperate by means of standards, socket layers, social practices, norms,
and individual behaviors that smooth out the connections among them.”79 This
contingency has direct implications for infrapolitics, which become equally
flexible and adaptive. These characteristics endow mass digitization
infrastructures with vulnerabilities but also with tremendous cultural power,
allowing them to distribute agency, and to create and facilitate new forms of
sociality and culture.

Building mass digitization infrastructures is a costly endeavor, and hence
mass digitization infrastructures are often backed by public-private
partnerships. Indeed infrastructures—and mass digitization infrastructures are
no exceptions—are often so costly that a certain mixture of political or
individual megalomania, state reach, and private capital is present in their
construction.80 This mixed foundation means that a lot of the political
decisions regarding mass digitization literally take place _beneath_ the radar
of “the representative institutions of the political system of nation-states,”
while also more or less aggressively filling out “gaps” in nation-state
systems, and even creating transnational zones with their own policies. 81
Hence the notion of “infra”: the infrapolitics of mass digitization hover at a
frequency that lies _below_ and beyond formal sovereign state apparatus,
organized, as they are, around glocal—and often private or privatized—material
and social infrastructures.

While distinct from the formalized sovereign political system, infrapolitical
assemblages nevertheless often perform as late-sovereign actors by engaging in
various forms of “sovereignty games.”82 Take Google, for instance, a private
corporation that often defines itself as at odds with state practice, yet also
often more or less informally meets with state leaders, engages in diplomatic
discussions, and enters into agreements with state agencies and local
political councils. The infrapolitical forces of Google in these sovereignty
games can on the one hand exert political pressure on states—for instance in
the name of civic freedom—but in Google’s embrace of politics, its
infrapolitical forces can on the other hand also squeeze the life out of
existing parliamentary ways, promoting instead various forms of apolitical or
libertarian modes of life. The infrapolitical apparatus thus stands apart from
more formalized politics, not only in terms of political arena, but also in
terms of the constraints placed upon it, for instance in the form of public
accountability.83 What is described here can in general terms be called the
infrapolitics of neoliberalism, whose scenery consists of lobby rooms, policy-
making headquarters, financial zones, public-private spheres, and is populated
by lobbyists, bureaucrats, lawyers, and CEOs.

But the infrapolitical dynamics of mass digitization also operate in more
mundane and less obvious settings, such as software design offices and
standardization agencies, and are enacted by engineers, statisticians,
designers, and even users. Infrastructures are—increasingly—essential parts of
our everyday lives, not only in mass digitization contexts, but in all walks
of life, from file formats and software programs to converging transportation
systems, payment systems, and knowledge infrastructures. Yet, what is most
significant about the majority of infrapolitical institutions is that they are
so mundane; if we notice them at all, they appear to us as boring “lists of
numbers and technical specifications.”84 And their maintenance and
construction often occurs “behind the scenes.”85 There is a politics to these
naturalizing processes, since they influence and frame our moral, scientific,
and aesthetic choices. This is to say that these kinds of infrapolitical
activities often retire or withdraw into a kind of self-evidence in which the
values, choices, and influences of infrastructures are taken for granted and
accorded an obviousness that is universally accepted. It is therefore
all the more “politically and ethically crucial”86 to recognize the
infrapolitics of mass digitization, not only as contestation and privatized
power games, but also as a mode of existence that values professionalized
standardization measures and mundane routines, not least because these
infrapolitical modes of existence often outlast their material circumstances
(“software outlasts hardware” as John Durham Peters notes).87 In sum,
infrastructures and the infrapolitics they produce yield subtle but
significant world-making powers.

## Power in Mass Digitization

If mass digitization is a product of a particular social configuration and
political infrastructure, it is also, ultimately, a site and an instrument of
power. In a sense, mass digitization is an event that stages a fundamental
confrontation between state and corporate power, while pointing to the
reconfigurations of both as they become increasingly embedded in digital
infrastructures. For instance, such confrontation takes place at the
negotiating table, where cultural heritage directors face the seductive and
awe-inspiring riches of Silicon Valley, as well as its overwhelmingly
intricate contractual layouts and its intimidating entourage of lawyers.
Confrontation also takes place at the level of infrastructural ideology, in
the meeting between twentieth-century standardization ideals and the playful
and flexible network dynamics of the twenty-first century, as seen for
instance in the conjunction of institutionally fixed taxonomies and
algorithmic retrieval systems that include feedback mechanisms. And it takes
place at the level of users, as they experience a gain in some powers and the
loss of others in their identity transition from national patrons of cultural
memory institutions to globalized users of mass digitization assemblages.

These transformations are partly the results of society’s increasing reliance
on network power and its effects. Political theorists Michael Hardt and
Antonio Negri suggested almost two decades ago that among other things, global
digital systems enabled a shift in power infrastructures from robust national
economies and core industrial sectors to interactive networks and flexible
accumulation, creating a “form of network power, which requires the wide
collaboration of dominant nation-states, major corporations, supra-national
economic and political institutions, various NGOs, media conglomerates and a
series of other powers.”88 From this landscape, according to their argument,
emerged a new system of power in which morphing networks took precedence over
reliable blocs. Hardt and Negri’s diagnosis was one of several similar
arguments across the political spectrum that were formed within such a short
interval that “the network” arguably became the “defining concept of our
epoch.”89 Within this new epoch, the old centralized blocs of power crumbled
to make room for new forms of decentralized “bastard” power phenomena, such as
the extensive corporate/state mass surveillance systems revealed by Edward
Snowden and others, and new forms of human rights such as “the right to be
forgotten,” a right for which a more appropriate name would be “the right to
not be found by Google.”90 Network power and network effects are therefore
central to understanding how mass digitization assemblages operate, and why
some mass digitization assemblages are more powerful than others.

The power dynamics we find in Google Books, for instance, are directly related
to the ways in which digital technologies harness network effects: the power
of Google Books grows exponentially as its network expands.91 Indeed, as Siva
Vaidhyanathan noted in his critical work on Google’s role in society, what he
referred to as the “Googlization of books” was ultimately deeply intertwined
with the “Googlization of everything.”92 The networks of Google were thus not
external to Google’s successes and challenges, but deeply endemic to them,
spanning portals, ranking systems, anchoring (elite) institutions, and
so on. The better Google Books becomes at harnessing network effects, the more
fundamental its influence is in the digital sphere. And Google Books is very
good at harnessing digital network power. Indeed, Google Books reached its
“tipping point” almost before it launched: it had by then already attracted so
many stakeholders that its mere existence decreased the power of any competing
entities—and the fact that it is embedded in Google’s heavy user traffic only
strengthened its network effects. Google Books’s tipping point tells us little
about its quality in an abstract sense: “tipping points” are more often
attained by proprietary measures, lobbying, expansion, and most typically by a
mixture of all of the above, than by sheer quality.93 This explains not only
the success of Google Books, but also its traction with even its critics:
although Google Books was initially criticized heavily for its poor imagery
and faulty metadata,94 for its possibly harmful impact on the public sphere,95
and later over privacy concerns,96 it had already created a power hub to which
masses of people, although they could have navigated around it, were
nevertheless increasingly drawn.

Network power is endemic not only to concrete digital networks, but also to
globalization at large as a process that simultaneously gives rise to feelings
of freedom of choice and loss of choice.97 Mass digitization assemblages, and
their globalization of knowledge infrastructures, thus crystalize the more
general tendencies of globalization as a process in which people participate
by choice, but not necessarily voluntarily; one in which we are increasingly
pushed into a game of social coordination, where common standards allow more
effective coordination yet also entrap us in their pull for convergence.
Standardization is therefore a key technique of network power: on the one
hand, standardization is linked with globalization (and various neoliberal
regimes) and the attendant widespread contraction of the state, while on the
other hand, standardization implies a reconfiguration of everyday life.98
Standards allow for both minute data analytics and overarching political
systems that “govern at a distance.”99 Standardization understood in this way
is thus a mode of capturing, conceptualizing, and configuring reality, rather
than simply an economic instrument or lubricant. In a sense, standardization
could even be said to be habit forming: through standardization, “inventions
become commonplace, novelties become mundane, and the local becomes
universal.”100

To be sure, standardization has long been a crucial tool of world-making
power, spanning both the early and late-capitalist eras.101 “Standard time,”
as John Durham Peters notes, “is a sine qua non for international
capitalism.”102 Without the standardized infrastructure of time there would be
no global transportation networks, no global trade channels, and no global
communication networks. Indeed, globalization is premised on standardization
processes.

What kind of standardization processes do we find, then, in mass digitization
assemblages? Internet use alone involves direct engagement with hundreds of
global standards, from Bluetooth and Wi-Fi to protocol standards such as HTTP
and file standards such as Word and MP4.103 Moreover, mass
digitization assemblages confront users with a series of additional standards,
from cultural standards of tagging to technical standards of interoperability,
such as the Europeana Data Model (EDM) and Google’s schema.org, or legal
standards such as copyright and privacy regulations. Yet, while these
standards share affinities with the standardization processes of
industrialization, in many respects they also deviate from them. Instead, we
experience in mass digitization “a new form of standardization,”104 in which
differentiation and flexibility gain increasing influence without, however,
dispensing with standardization processes.
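
To make this kind of interoperability standard slightly more tangible, the sketch below shows roughly what a metadata record might look like when expressed with the schema.org vocabulary as JSON-LD. The vocabulary and the book (Otlet’s Traité de documentation, 1934) are real; the record itself is invented for illustration and is not drawn from any actual mass digitization project.

```python
import json

# A minimal record for a digitized book, expressed with schema.org terms
# in JSON-LD. The book is real; the record is constructed for this example.
record = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "Traité de documentation",
    "author": {"@type": "Person", "name": "Paul Otlet"},
    "datePublished": "1934",
    "inLanguage": "fr",
}

print(json.dumps(record, ensure_ascii=False, indent=2))
```

Such a shared vocabulary is what allows aggregators to harvest and merge records from many institutions at once, which is precisely why standards of this kind sit at the center of mass digitization assemblages.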

Today’s standardization is increasingly coupled with demands for flexibility
and interoperability. Flexibility, as Joyce Kolko has shown, is a term that
gained traction in the 1970s, when it was employed to describe putative
solutions to the problems of Fordism.105 It was seen as an antidote to Fordist
“rigidity”—a serious offense in the neoliberal regime. Thus, while the digital
networks underlying mass digitization are geared toward standardization and
expansion, since “information technology rewards scale, but only to the extent
that practices are standardized,”106 they are also becoming increasingly
flexible, since too-rigid standards hinder network effects, that is, the
growth of additional networks. This is one reason why mass digitization
assemblages increasingly and intentionally break down the so-called “silo”
thinking of cultural memory institutions, and implement standard flexibility
and interoperability to increase their range.107 One area of such
reconfiguration in mass digitization is the taxonomic field, where stable
institutional taxonomic structures are converted to new flexible modes of
knowledge organization like linked data.108 Linked data can connect cultural
memory artifacts as well as metadata in new ways, and the move from a cultural
memory web of interlinked documents to a cultural memory web of interlinked
data can potentially “amplify the impact of the work of libraries and
archives.”109 However, in order to work effectively, linked data demands
standards and shared protocols.
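
As a minimal sketch of what such linking looks like at the level of data, the example below uses the Python rdflib library and invented example URIs (no institution publishes records at these addresses): two separately catalogued descriptions of the same work are joined by a single link, so that a query starting from either record can reach the metadata held by the other.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, OWL

# Invented identifiers standing in for the "same" work as described by
# two different institutions; real records would use their own URIs.
RECORD_A = URIRef("http://example.org/library-a/work/123")
RECORD_B = URIRef("http://example.org/aggregator-b/item/456")

g = Graph()
g.bind("dcterms", DCTERMS)

# Each institution contributes its own metadata about the item ...
g.add((RECORD_A, DCTERMS.title, Literal("Traité de documentation")))
g.add((RECORD_B, DCTERMS.creator, Literal("Paul Otlet")))

# ... and one triple links the two descriptions, letting a query that
# starts from either record reach the metadata held by the other.
g.add((RECORD_A, OWL.sameAs, RECORD_B))

print(g.serialize(format="turtle"))
```

Even this toy example depends on both parties committing to shared protocols and vocabularies (here RDF, Dublin Core terms, and OWL), which is the sense in which linked data demands standards as much as it promises flexibility.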

Flexibility allows the user a freer range of actions, and thus potentially
also the possibility of innovation. These affordances often translate into
user freedom or empowerment. Yet flexibility does not necessarily equal
fundamental user autonomy or control. On the contrary, flexibility is often
achieved through decomposition, modularization, and black-boxing, allowing
some components to remain stable while others are changed without implications
for the rest of the system.110 These components are made “fluid” in the sense
that they are deprived of clear boundaries and allowed multiple identities,
and in that they enable continuity and dissolution.

While these new flexible standard-setting mechanisms are often localized in
national and subnational settings, they are also globalized systems “oriented
towards global agendas and systems.”111 Indeed, they are “glocal”
configurations with digital networks at their cores. The increasing
significance of these glocal configurations has not only cultural but also
democratic consequences, since they often leave users powerless when it comes
to influencing their cores.112 This more fundamental problematic also pertains
to mass digitization, a phenomenon that operates in an environment that
constructs and encourages less Habermasian public spheres than “relations of
sociability,” from which “aggregate outcomes emerge not from an act of
collective decision-making, but through the accumulation of decentralized,
individual decisions that, taken together, nonetheless conduce to a
circumstance that affects the entire group.”113 For example, despite the
flexibility Google Books allows us in terms of search and correlation, we have
very little sway over its construction, even though we arguably influence its
dynamics. The limitations of our influence on the cores of mass digitization
assemblages have implications not only for how we conceive of institutional
power, but also for our own power within these matrixes.

## Notes

1. Borghi 2012, 420.
2. Latour 2008.
3. For more on this, see Hicks 2018; Abbate 2012; Ensmenger 2012. In the case of libraries, (white) women still make up the majority of the workforce, but there is a disproportionate number of men in senior positions, in comparison with their overall representation; see, for example, Schonfeld and Sweeney 2017.
4. Meckler 1982.
5. Otlet and Rayward 1990, chaps. 6 and 15.
6. For a historical and contemporary overview of some milestones in the use of microfilms in a library context, see Canepi et al. 2013, specifically “Historic Overview.” See also chap. 10 in Baker 2002.
7. Pfanner 2012.
8. .
9. Medak et al. 2016.
10. Michael S. Hart, “The History and Philosophy of Project Gutenberg,” Project Gutenberg, August 1992, .
11. Ibid.
12. .
13. Ibid.
14. Bruno Delorme, “Digitization at the Bibliotheque Nationale De France, Including an Interview with Bruno Delorme,” _Serials_ 24 (3) (2011): 261–265.
15. Alain Giffard, “Dilemmas of Digitization in Oxford,” _AlainGiffard’s Weblog_, posted May 29, 2008, in-oxford>.
16. Ibid.
17. Author’s interview with Alain Giffard, Paris, 2010.
18. Ibid.
19. Later, in 1997, François Mitterrand demanded that the digitized books should be brought online, accessible as text from everywhere. This, then, was what became known as Gallica, the digital library of the BnF, which was launched in 1997. Gallica contains documents primarily out of copyright from the Middle Ages to the 1930s, with priority given to French-speaking culture, hosting about 4 million documents.
20. Imerito 2009.
21. Ambati et al. 2006; Chen 2005.
22. Ryan Singel, “Stop the Google Library, Net’s Librarian Says,” _Wired_, May 19, 2009, library-nets-librarian-says>.
23. Alfred P. Sloan Foundation, Annual Report, 2006, .
24. Leetaru 2008.
25. Amazon was also a major player in the early years of mass digitization. In 2003 they gave access to a digital archive of more than 120,000 books with the professed goal of adding Amazon’s multimillion-title catalog in the following years. As with all other mass digitization initiatives, Jeff Bezos faced a series of copyright and technological challenges. He met these with legal rhetorical ingenuity and the technical skills of Udi Manber, who later became the lead engineer with Google; see, for example, Wolf 2003.
26. Leetaru 2008.
27. John Markoff, “The Coming Search Wars,” _New York Times_, February 1, 2004, .
28. Google press release, “Google Checks out Library Books,” December 14, 2004, .
29. Vise and Malseed 2005, chap. 21.
30. Auletta 2009, 96.
31. Johann Wolfgang Goethe, _Sprüche in Prosa_, “Werke” (Weimar edition), vol. 42, pt. 2, 141; cited in Cassirer 1944.
32. Philip Jones, “Writ to the Future,” _The Bookseller_, October 22, 2015, future-315153>.
33. “Jacques Chirac donne l’impulsion à la création d’une bibliothèque numérique,” _Le Monde_, March 16, 2005, donne-l-impulsion-a-la-creation-d-une-bibliotheque-numerique_401857_3246.html>.
34. “An overwhelming American dominance in defining future generations’ conception about the world” (author’s own translation). Ibid.
35. Labi 2005; “The worst scenario we could achieve would be that we had two big digital libraries that don’t communicate. The idea is not to do the same thing, so maybe we could cooperate, I don’t know. Frankly, I’m not sure they would be interested in digitizing our patrimony. The idea is to bring something that is complementary, to bring diversity. But this doesn’t mean that Google is an enemy of diversity.”
36. Chrisafis 2008.
37. Béquet 2009. For more on the political potential of archives, see Foucault 2002; Derrida 1996; and Tygstrup 2014.
38. “Comme vous soulignez, nos bibliothèques et nos archives contiennent la mémoire de nos culture européenne et de société. La numérisation de leur collection—manuscrits, livres, images et sons—constitue un défi culturel et économique auquel il serait bon que l’Europe réponde de manière concertée.” (As you point out, our libraries and archives contain the memory of our European culture and society. Digitization of their collections—manuscripts, books, images, and sounds—is a cultural and economic challenge it would be good for Europe to meet in a concerted manner.) Manuel Barroso, open letter to Jacques Chirac, July 7, 2007, [http://www.peps.cfwb.be/index.php?eID=tx_nawsecuredl&u=0&file=fileadmin/sites/numpat/upload/numpat_super_editor/numpat_editor/documents/Europe/Bibliotheques_numeriques/2005.07.07reponse_de_la_Commission_europeenne.pdf&hash=fe7d7c5faf2d7befd0894fd998abffdf101eecf1](http://www.peps.cfwb.be/index.php?eID=tx_nawsecuredl&u=0&file=fileadmin/sites/numpat/upload/numpat_super_editor/numpat_editor/documents/Europe/Bibliotheques_numeriques/2005.07.07reponse_de_la_Commission_europeenne.pdf&hash=fe7d7c5faf2d7befd0894fd998abffdf101eecf1).
39. Jøsevold 2016.
40. Janssen 2011.
41. Robert Darnton, “Google’s Loss: The Public’s Gain,” _New York Review of Books_, April 28, 2011, .
42. Palfrey 2015, 104.
43. See, for example, DPLA’s Public Library Partnerships Project, partnerships>.
44. Karaganis 2018.
45. Sassen 2008, 3.
46. Coyle 2006; Borghi and Karapapa, _Copyright and Mass Digitization_; Patra, Kumar, and Pani, _Progressive Trends in Electronic Resource Management in Libraries_.
47. Borghi 2012.
48. Beagle et al. 2003; Lavoie and Dempsey 2004; Courant 2006; Earnshaw and Vince 2007; Rieger 2008; Leetaru 2008; Deegan and Sutherland 2009; Conway 2010; Samuelson 2014.
49. The earliest textual reference to the mass digitization of books dates to the early 1990s. Richard de Gennaro, Librarian of Harvard College, in a panel on funding strategies, argued that an existing preservation program called “brittle books” should take precedence over other preservation strategies such as mass deacidification; see Sparks, _A Roundtable on Mass Deacidification_, 46. Later the word began to attain the sense we recognize today, as referring to digitization on a large scale. In 2010 a new word popped up, “ultramass digitization,” a concept used to describe the efforts of Google vis-à-vis more modest large-scale digitization projects; see Greene 2010.
50. Kevin Kelly, “Scan This Book!,” _New York Times_, May 14, 2006, ; Hall 2008; Darnton 2009; Palfrey 2015.
51. As Alain Giffard notes, “I am not very confident with the programs of digitization full of technical and economical considerations, but curiously silent on the intellectual aspects” (Alain Giffard, “Dilemmas of Digitization in Oxford,” _AlainGiffard’s Weblog_, posted May 29, 2008, oxford>).
52. Tiffen 2007, 344. See also Peatling 2004.
53. Sassen 2008.
54. See _The Authors Guild et al. vs. Google, Inc._, Amended Settlement Agreement 05 CV 8136, United States District Court, Southern District of New York (2009), sec. 7(2)(d) (research corpus), sec. 1.91, 14.
55. Informational capitalism is a variant of late capitalism, which is based on cognitive, communicative, and cooperative labor. See Christian Fuchs, _Digital Labour and Karl Marx_ (New York: Routledge, 2014), 135–152.
56. Miksa 1983, 93.
57. Midbon 1980.
58. Said 1983, 237.
59. For example, the diverse body of scholarship that employed the notion of “assemblage” as a heuristic and/or ontological device for grasping and formulating these changing relations of power and control; in sociology: Haggerty and Ericson 2000; Rabinow 2003; Ong and Collier 2005; Callon et al. 2016; in geography: Anderson and McFarlane 2011, 124–127; in philosophy: Deleuze and Guattari 1987; DeLanda 2006; in cultural studies: Puar 2007; in political science: Sassen 2008. The theoretical scope of these works ranged from close readings of and ontological alignments with Deleuze and Guattari’s work (e.g., DeLanda), to more straightforward descriptive employments of the term as outlined in the OED (e.g., Sassen). What the various approaches held in common was the effort to steer readers away from thinking in terms of essences and stability toward thinking about more complex and unstable structures. Indeed, the “assemblage” seems to have become a prescriptive as much as a diagnostic tool (Galloway 2013b; Weizman 2006).
60. Deleuze 1997; Foucault 2009; Hardt and Negri 2007.
61. DeLanda 2006; Paul Rabinow, “Collaborations, Concepts, Assemblages,” in Rabinow and Foucault 2011, 113–126, at 123.
62. Latour 2005, 28.
63. Ibid., 35.
64. Tim Stevens, _Cyber Security and the Politics of Time_ (Cambridge: Cambridge University Press, 2015), 33.
65. Abrahamsen and Williams 2011.
66. Walker 2003.
67. Deleuze and Guattari 1987, 116.
68. Parisi 2004, 37.
69. Hacking 1995, 210.
70. Scott 2009. In James C. Scott’s formulation, infrapolitics is a form of micropolitics, that is, the term refers to political acts that evade the formal political apparatus. This understanding was later taken up by Robin D. G. Kelley and Alberto Moreiras, and more recently by Stevphen Shukaitis and Angela Mitropoulos. See Kelley 1994; Shukaitis 2009; Mitropoulos 2012; Alberto Moreiras, _Infrapolitics: The Project and Its Politics. Allegory and Denarrativization. A Note on Posthegemony_. eScholarship, University of California, 2015.
71. James C. Scott also concedes as much when he briefly links his notion of infrapolitics to infrastructure, as the “cultural and structural underpinning of the more visible political action on which our attention has generally been focused”; Scott 2009, 184.
72. Mitropoulos 2012, 115.
73. Bowker and Star 1999, 319.
74. Centre National de Ressources Textuelles et Lexicales, .
75. For an English etymological examination, see also Batt 1984, 1–6.
76. This is on account of their malleability and the uncanny way they are used to fit every circumstance. For more on the potentials and problems of plastic words, see Pörksen 1995.
77. Edwards 2003, 186–187.
78. Mitropoulos 2012, 117.
79. Edwards et al. 2012.
80. Peters 2015, at 31.
81. Beck 1996, 1–32, at 18; Easterling 2014.
82. Adler-Nissen and Gammeltoft-Hansen 2008.
83. Holzer and Mads 2003.
84. Star 1999, 377.
85. Ibid.
86. Bowker and Star 1999, 326.
87. Peters 2015, 35.
88. Hardt and Negri 2009, 205.
89. Chun 2017.
90. As argued by John Naughton at the _Negotiating Cultural Rights_ conference, National Museum, Copenhagen, Denmark, November 13–14, 2015, .
91. The “tipping point” is a metaphor for sudden change first introduced by
Morton Grodzins in 1960, later used by sociologists such as Thomas Schelling
(for explaining demographic changes in mixed-race neighborhoods), before
becoming more generally familiar in urbanist studies (used by Saskia Sassen,
for instance, in her analysis of global cities), and finally popularized by
mass psychologists and trend analysts such as Malcolm Gladwell, in his
bestseller of that name; see Gladwell 2000. 92. “Those of us who take
liberalism and Enlightenment values seriously often quote Sir Francis Bacon’s
aphorism that ‘knowledge is power.’ But, as the historian Stephen Gaukroger
argues, this is not a claim about knowledge: it is a claim about power.
‘Knowledge plays a hitherto unrecognized role in power,’ Gaukroger writes.
‘The model is not Plato but Machiavelli.’1 Knowledge, in other words, is an
instrument of the powerful. Access to knowledge gives access to that
instrument of power, but merely having knowledge or using it does not
automatically confer power. The powerful always have the ways and means to use
knowledge toward their own ends. … How can we connect the most people with the
best knowledge? Google, of course, offers answers to those questions. It’s up
to us to decide whether Google’s answers are good enough.” See Vaidhyanathan
2011, 149–150. 93. Easley and Kleinberg 2010, 528. 94. Duguid 2007; Geoffrey
Nunberg, “Google’s Book Search: A Disaster for Scholars,” _Chronicle of Higher
Education,_ August 31, 2009; _The Idea of Order: Transforming Research
Collections for 21st Century Scholarship_ (Washington, DC: Council on Library
and Information Resources, 2010), 106–115. 95. Robert Darnton, “Google’s Loss:
The Public’s Gain,” _New York Review of Books_ , April 28, 2011,
. 96.
Jones and Janes 2010. 97. David S. Grewal, _Network Power: The Social Dynamics
of Globalization_ (New Haven: Yale University Press, 2008). 98. Higgins and
Larner, _Calculating the Social: Standards and the Reconfiguration of
Governing_ (Basingstoke: Palgrave Macmillan, 2010). 99. Ponte, Gibbon, and
Vestergaard 2011; Gibbon and Henriksen 2012. 100. Russell 2014. See also Wendy
Chun on the correlation between habit and standardization: Chun 2017. 101.
Busch 2011. 102. Peters 2015, 224. 103. DeNardis 2011. 104. Hall and Jameson
1990. 105. Kolko 1988. 106. Agre 2000. 107. For more on the importance of
standard flexibility in digital networks, see Paulheim 2015. 108. Linked data
captures the intellectual information users add to information resources when
they describe, annotate, organize, select, and use these resources, as well as
social information about their patterns of usage. On one hand, linked data
allows users and institutions to create taxonomic categories for works on a
par with cultural memory experts—and often in conflict with such experts—for
instance by linking classical nudes with porn; and on the other hand, it
allows users and institutions to harness social information about patterns of
use. Linked data has ideological and economic underpinnings as much as
technical ones. 109.  _The National Digital Platform: for Libraries, Archives
and Museums_ , 2015, report-national-digital-platform>. 110. Petter Nielsen and Ole Hanseth, “Fluid
Standards. A Case Study of a Norwegian Standard for Mobile Content Services,”
under review,
.
111. Sassen 2008, 3. 112. Grewal 2008. 113. Ibid., 9.

# II
Mapping Mass Digitization

# 2
The Trials, Tribulations, and Transformations of Google Books

## Introduction

In a 2004 article in the cultural theory journal _Critical Inquiry_ , book
historian Roger Chartier argued that the electronic world had created a triple
rupture in the world of text: by providing new techniques for inscribing and
disseminating the written word, by inspiring new relationships with texts, and
by imposing new forms of organization onto them. Indeed, Chartier foresaw that
“the originality and the importance of the digital revolution must therefore
not be underestimated insofar as it forces the contemporary reader to
abandon—consciously or not—the various legacies that formed it.”1 Chartier’s
premonition was inspired by the ripples that digitization was already
spreading across the sea of texts. People were increasingly writing and
distributing electronically, interacting with texts in new ways, and operating
and implementing new textual economies.2 These textual transformations gave
rise to a range of emotional reactions in readers and publishers, from
catastrophizing attitudes and pessimism about “the end of the book” to the
triumphalist mythologizing of liquid virtual books that were shedding their
analog ties like butterflies shedding their cocoons.

The most widely publicized mass digitization project to date, Google Books,
precipitated the entire emotional spectrum that could arise from these textual
transversals: from fears that control over culture was slipping from authors
and publishers into the hands of large tech companies, to hopeful ideas about
the democratizing potential of bringing knowledge that was once locked up in
dusty tomes at places like Harvard and Stanford, and to a utopian
mythologizing of the transcendent potential of mass digitization. Moreover,
Google Books also affected legal and professional transformations of the
infrastructural set-up of the book, creating new precedents and a new
professional ethos. The cultural, legal, and political significance of Google
Books, whether positive or negative, not only emphasizes its fundamental role
in shaping current knowledge landscapes, it also allows us to see Google Books
as a prism that reflects more general political tendencies toward
globalization, privatization, and digitization, such as modulations in
institutional infrastructures, legal landscapes, and aesthetic and political
conventions. But how did the unlikely marriage between a tech company and
cultural memory institutions even come about? Who drove it forward, and around
and within which infrastructures? And what kind of cultural memory politics
did it produce? The following sections of this chapter will address some of
these problematics.

## The New Librarians

It was in the midst of a turbulent restructuring of the world of text, in
October 2004 at the Frankfurt International Book Fair, that Larry Page and
Sergey Brin of Google announced the launch of Google Print, a cooperation
between Google and leading Anglophone publishers. Google Print, which later
became Google Partner Program, would significantly alter the landscape and
experience of cultural memory, as well as its regulatory infrastructures. A
decade later, the traditional practices of reading, and the guardianship of
text and cultural works, had acquired entirely new meanings. In October 2004,
however, the publishing world was still unaware of Google’s pending influence
on the institutional world of cultural memory. Indeed, at that time, Amazon’s
mounting dominance in the field of books, which began a decade earlier in
1995, appeared to pose much more significant implications. The majority of
publishers therefore greeted Google’s plans in Frankfurt as a welcome
alternative to Jeff Bezos’s growing online behemoth.

Larry Page and Sergey Brin withheld a few details from their announcement at
Frankfurt, however; Google’s digitization plans would involve not only
cooperation with publishers, but also with libraries. As such, what would
later become Google Books would in fact consist of two separate, yet
interrelated, programs: Google Print (which would later become Google Partner
Program) and Google Library Project. In all secrecy, Google had for many
months prior to the Frankfurt Book Fair worked with select libraries in the US
and the UK to digitize their holdings. And in December 2004 the true scope of
Google’s mass digitization plans was revealed: what Page and Brin were
building was the foundation of a groundbreaking cultural memory archive,
inspired by the myth of Alexandria.3 The invocation of Alexandria situated the
nascent Google Books project in a cultural schema that historicized the
project as a utopian, even moral and idealist, project that could finally,
thanks to technology, exceed existing human constraints—legal, political, and
physical.4

Google’s utopian discourse was not foreign to mass digitization enthusiasts.
Indeed, it was the _langue du jour_ underpinning most large-scale digitization
projects, a discourse nurtured and influenced by the seemingly borderless
infrastructure of the web itself (which was often referred to in
universalizing terms). 5 Yet, while the universalizing discourse of mass
digitization was familiar, it had until then seemed like aspirational talk at
best, and strategic policy talk in the face of limited public funding, complex
copyright landscapes, and lumbering infrastructures, at worst. Google,
however, faced the task with a fresh attitude of determination and a will to
disrupt, as well as a very different form of leverage in terms of
infrastructural set-up. Google was already the world’s preferred search
engine, having mastered the tactical skill of navigating its users through
increasingly complex information landscapes on the web, and harvesting their
metadata in the process to continuously improve Google’s feedback systems.
Essentially, ever-larger amounts of information (understood here as “users”)
were passing through Google’s crawling engines, and as the masses of
information in Google’s server parks grew, so did their computational power.
Google Books, then, as opposed to most existing digitization projects, which
were conceived mainly in terms of “access,” was embedded in the larger system
of Google that understood the power and value of “feedback,” collecting
information and entering it into feedback loops between users, machines, and
engineers. Google also understood that information power didn’t necessarily
lie in owning all the information they gave access to, but rather in
controlling the informational processes themselves.

Yet, despite Google’s advances in information seeking behaviors, the idea of
Google Books appeared as an odd marriage. Why was a private company in Silicon
Valley, working in the futuristic and accelerating world of software and fluid
information streams, intent on partnering up with the slow-paced world of
cultural memory institutions, traditionally more concerned with the past?
Despite the apparent clash of temporal and cultural regimes, however, Google
was in fact returning home to its point of inception. Google was born of a
research project titled the Stanford Integrated Digital Library Project, which
was part of the NSF’s Digital Libraries Initiative (1994–1999). Larry Page and
Sergey Brin were students then, working on the Stanford component of this
project, intending to develop the base technologies required to overcome the
most critical barriers to effective digital libraries, of which there were
many.6 Page’s and Brin’s specific project, titled Google, was presented as a
technical solution to the increasing amount of information on the World Wide
Web.7 At Stanford, Larry Page also tried to facilitate a serious discussion of
mass digitization, and of whether or not it was feasible. But his
ideas received little support, and he was forced to leave the idea on the
drawing board in favor of developing search technologies.8

In September 1998, Sergey Brin and Larry Page left the library project to
found Google as a company and became immersed in search engine technologies.
However, a few years later, Page resuscitated the idea of mass digitization as
a part of their larger self-professed goal to change the world of information
by increasing access, scaling the amount of information available, and
improving computational power. They convinced Eric Schmidt, the new CEO of
Google, that the mass digitization of cultural works made sense not only from
an information perspective, but also from a business perspective, since the
vast amounts of information Google could extract from books would improve
Google’s ability to deliver information that was hitherto lacking, and this
new content would eventually also result in an increase in traffic and clicks
on ads.9

## The Scaling Techniques of Mass Digitization

A series of experiments followed on how to best approach the daunting task.
The emergence and decay of these experiments highlight the ways in which mass
digitization assemblages consist not only of thoughts, ideals, and materials,
but also a series of cultural techniques that entwine temporality,
materiality, and even corporeality. This perspective on mass digitization
emphasizes the mixed nature of mass digitization assemblages: what at first
glance appears as a relatively straightforward story about new technical
inventions, at a closer look emerges as complex entanglements of human and
nonhuman actors, with implications not only for how we approach it as a legal-
technical entity but also as an infrapolitical phenomenon. As the following
section shows, attending to the complex cultural techniques of mass
digitization (its “how”) enables us to see that its “minor” techniques are not
excluded from or irrelevant to, but rather are endemic to, larger questions of
the infrapolitics of digital capitalism. Thus, Google’s simple technique of
scaling scanning to make the digitization processes go faster becomes
entangled in the creation of new habits and techniques of acceleration and
rationalization that tie in with the politics of digital culture and digital
devices. The industrial scaling of mass digitization becomes a crucial part of
the industrial apparatus of big data, which provides new modes of inscription
for both individuals and digital industries that in turn can be capitalized on
via data-mining, just as it raises questions of digital labor and copyright.

Yet, what kinds of scaling techniques—and what kinds of investments—Google
would have to leverage to achieve its initial goals were still unclear in
those early years. Larry Page and co-worker Marissa Mayer therefore
began to experiment with the best ways to proceed. First, they created a
makeshift scanning device, whereby Marissa Mayer would turn the page and Larry
Page would click the shutter of the camera, guided by the pace of a
metronome.10 These initial mass digitization experiments signaled the
industrial nature of the mass digitization process, adding a metronomic
rhythm governed by the implacable regularity of the machine to the temporal
horizon of eternity (or at least of material decay) that reigns in cultural
memory institutions.11 After some experimentation with scale and time, Google
bought a consignment of books from a second-hand book store in Arizona. They
scanned them and subsequently experimented with how to best index these works
not only by using information from the book, but also by pulling data about
the books from various other sources on the web. These extractions allowed
them to calculate a work’s relevance and importance, for instance by looking
at the number of times it had been referred to.12
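
The indexing experiments described above amount, in essence, to a citation-style relevance calculation: a work’s importance is estimated from how often it is referred to in material gathered from elsewhere on the web. The sketch below is a minimal, hypothetical Python illustration of that logic; the naive substring matching, function names, and sample data are assumptions for demonstration, not Google’s actual method (which note 12 likens to a PageRank-style algorithm for books).

```python
from collections import Counter

def score_books(book_titles, external_sources):
    """Count how often each scanned title is mentioned across external
    sources (web pages, bibliographies, etc.) and rank books accordingly."""
    scores = Counter({title: 0 for title in book_titles})
    for source_text in external_sources:
        lowered = source_text.lower()
        for title in book_titles:
            # Naive substring matching stands in for real citation extraction.
            scores[title] += lowered.count(title.lower())
    # A higher count is treated here as a proxy for relevance/importance.
    return scores.most_common()

if __name__ == "__main__":
    books = ["Moby-Dick", "The Origin of Species"]
    sources = [
        "A reading list citing Moby-Dick and The Origin of Species.",
        "Another page that discusses The Origin of Species at length.",
    ]
    print(score_books(books, sources))
```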

In 2004 Google was also granted patent rights to a scanner that would be able
to scan the pages of works without destroying them, and which would make them
searchable thanks to sophisticated 3D scanning and complex algorithms.13
Google’s new scanner used infrared camera technology that detected the three-
dimensional shape and angle of book pages when the book was placed in the
scanner. The information from the book was then transmitted to Optical
Character Recognition (OCR), which adjusted image focus and allowed the OCR
software to read images of curved surfaces more accurately.
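
The patented pipeline described here can be pictured as three steps: measure the page’s three-dimensional curvature, use that measurement to flatten (dewarp) the scanned image, and only then hand the image to OCR. The following is a minimal sketch of the last two steps under stated assumptions: it presumes a per-column curvature profile has already been estimated (in Google’s case via the infrared measurement), and it uses the open-source OpenCV and Tesseract libraries rather than Google’s internal tooling; the file name and parameters are illustrative.

```python
import numpy as np
import cv2            # OpenCV, used here for the geometric remapping
import pytesseract    # Tesseract bindings, used here for the OCR step

def dewarp(image, curvature_profile):
    """Shift each pixel column vertically by its estimated curvature offset,
    approximating the flattening that a 3D page model would allow."""
    h, w = image.shape[:2]
    map_x, map_y = np.meshgrid(np.arange(w, dtype=np.float32),
                               np.arange(h, dtype=np.float32))
    # curvature_profile[x] = how many pixels column x bulges away from flat.
    map_y += curvature_profile[np.newaxis, :].astype(np.float32)
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REPLICATE)

def scan_page(path, curvature_profile):
    """Load a page image, flatten it, and return the recognized text."""
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    flattened = dewarp(image, curvature_profile)
    return pytesseract.image_to_string(flattened)

# Hypothetical usage, assuming a 2480-pixel-wide scan and a profile produced
# by some curvature-estimation step:
# profile = np.zeros(2480)                # a flat page as the trivial case
# text = scan_page("page.png", profile)
```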

![11404_002_fig_001.jpg](images/11404_002_fig_001.jpg)

Figure 2.1 François-Marie Lefevere and Marin Saric. “Detection of grooves in
scanned images.” U.S. Patent 7508978B1. Assigned to Google LLC.

These new scanning technologies allowed Google to unsettle the fixed content
of cultural works on an industrial scale and enter them into new distribution
systems. The untethering and circulation of text already existed, of course,
but now text would mutate on an industrial scale, bringing into coexistence a
multiplicity of archiving modes and textual accumulation. Indeed, Google’s
systematic scaling-up of already existing technologies on an industrial and
accelerated scale posed a new paradigm in mass digitization, to a much larger
extent than, for instance, inventions of new technologies.14 Thus, while
Google’s new book scanners did expand the possibilities of capturing
information, Google couldn’t solve the problem of automating the process of
turning the pages of the books. For that they had to hire human scanners who
were asked to manually turn pages. The work of these human scanners was
largely invisible to the public, who could only see the books magically
appearing online as the digital archive accumulated. The scanners nevertheless
left ghostly traces, in the form of scanning errors such as pink fingers and
missing and crumbled pages—visual traces that underlined the historically
crucial role of human labor in industrializing and automating processes.15
Indeed, the question of how to solve human errors in the book scanning process
led to a series of inventive systems, such as the patent granted to Google in
2009 (filed in 2003), which describes a system that would minimize scanning
errors with the help of music.16 Later, Google open sourced plans for a book
scanner named “Linear Book Scanner” that would turn the pages automatically
with the help of a vacuum cleaner and a cleverly designed sheet metal
structure, after passing them over two image sensors taken from a desktop
scanner.17

Eventually, after much experimentation, Google consolidated its mass
digitization efforts in collaboration with select libraries.18 While some
institutions immediately and enthusiastically welcomed Google’s aspirations as
aligning with their own mission to improve access to information, others were
more hesitant, an institutional vacillation that hinted ominously at
controversy to come. Some libraries, such as the University of Michigan,
greeted the initiative with enthusiasm, whereas others, such as the Library of
Congress, saw a red flag pop up: copyright, one of the most fundamental
elements in the rights of texts and authors.19 The Library of Congress
questioned whether it was legal to scan and index books without a rights
holder’s permission. Google, in response, argued that it was within the fair
use provisions of the law, but the argument was speculative in so far as there
was no precedent for what Google was going to do. Some universities
agreed with Google’s views on copyright and shared its desire to disrupt
existing copyright practices, and allowed Google to make digital copies of
their holdings (a precondition for creating an index of them). Hence, some
libraries gave full access, others allowed only the scanning of books in the
public domain (published before 1923), and still others denied access
altogether. While the reticence of libraries was scattered, it was also a
precursor of a much more zealous resistance to Google Books, an opposition
that was mounted by powerful voices in the cultural world, namely publishers
and authors, and other commercial infrastructures of cultural memory.

![11404_002_fig_002.jpg](images/11404_002_fig_002.jpg)

Figure 2.2 Joseph K. O’Sullivan, Alexander Proudfoot, and Christopher R.
Uhlik. “Pacing and error monitoring of manual page turning operator.” U.S.
Patent 7619784B1. Assigned to Google LLC, Google Technology Holdings LLC.

While Google’s announcement of its cooperation with publishers at the
Frankfurt Book Fair was received without drama—even welcomed by many—the
announcement of its cooperation with libraries a few months later caused a
commercial uproar. The most publicized point of contestation was the fact that
Google was now not only displaying books in cooperation with publishers, but
also building a library of its own, without remunerating publishers and
authors. Why would readers buy books if they could read them free online?
Moreover, the Authors Guild worried that Google’s digital library would
increase the risk of piracy. At a deeper level, the case also emphasized
authors’ and publishers’ desire to retain control over their copyrighted works
in the face of the threat that the Library Project (unlike the Partner
Program) was posing: Google was digitizing without the copyright holder’s
permission. Thus, to them, the Library Project fundamentally threatened their
copyrights and, on a more fundamental level, existing copyright systems. Both
factors, they argued, would make book buying a superfluous activity.20 The
harsher criticisms framed Google Books as a book thief rather than as a global
philanthropist.21 Google, for its part, launched a defense of its actions
based on the notion of “fair use,” which, as the following section shows,
eventually became the fundamental legal question.

## Infrastructural Transformations

Google Books became the symbol of the painful confusion and territorial
battles that marred the publishing world as it underwent a transformation from
analog to digital. The mounting and diverse opposition to Google Books was
thus not an isolated affair, but rather a persistent symptom—increasingly loud
stress signals emitting from the infrastructural joints of the analog realm of
books as it buckled under the strain of digital logic. As media theorist John
Durham Peters (drawing on media theorist Harold Innis) notes, the history of
media is also an “occupational history” that tells the tales of craftspeople
mastering medium-specific skills, tactically battling for monopolies of
knowledge and guarding their access.22 And in the occupational history of
Google Books, the craftspeople of the printed book were being challenged by a
new breed of artificers who were excelling not so much in how to print, which
book sellers to negotiate with, or how to sell books to people, but rather in
the medium-specific tactical skills of the digital, such as building software
and devising search technologies, skills they were leveraging to their own
gain to create new “monopolies of knowledge” in the process.

As previously mentioned, the concerns expressed by publishers and authors in
regard to remuneration were accompanied by a more abstract sense of a loss of
control over their works and how this loss of control would affect the
copyrights. These concerns did not arise out of thin air, but were part of a
more general discourse on digital information as something that _cannot_ be
secured and controlled in the same way as analog commodities can. Indeed, it
seemed that authors and publishers were part of a world entirely different
from Google Books: while publishers and authors were still living in and
defending a “regime of scarcity,” 23 Google Books, by contrast, was busy
building a “realm of plenitude and infinite replenishment.” As such, the clash
between the traditional infrastructures of the analog book and the new
infrastructures of Google Books was symptomatic of the underlying radical
reorganization of information from a state of trade and exchange to a state of
constant transmission and contagion.24

Foregrounding the fair use defense,25 Google argued that the public benefits
of scanning outweighed the negative consequences for authors.26 Influential
legal scholars such as Lawrence Lessig, among others, supported this argument,
suggesting that inclusion in a search engine in a way that does not erode the
value of the book was of such societal importance that it should be deemed
legal.27 The copyright owners, however, insisted that the burden should be on
Google to request permission to scan each work.28

Google and copyright owners reached a proposed settlement on October 28, 2008.
The proposal would allow Google not only to continue its scanning activities
and to show free snippets online, but would also give Google exclusive rights
to sell digital copies of out-of-print books. In return, Google would provide
all libraries in the United States with one free subscription to the digital
database, but Google could also sell additional subscriptions. Moreover,
Google was to pay $125 million, part of which would go to the construction of
a Book Rights Registry that identified rights holders and handled payments to
lawyers.29 Yet before the settlement was even formally treated, a mounting
opposition to it was launched in public.

The proposed settlement was received with harsh words, for instance by
Internet archivist Brewster Kahle and legal scholar Lawrence Lessig, who
opposed the settlement with words ranging from “insanity” to “cultural
asphyxiation” and “information monopoly.”30 Privacy proponents also spoke out
against Google Books, bringing attention to the implications of Google being
able to follow and track reading habits, among other things.31 The
organization Privacy Authors and Publishers, which included writers such as
Jonathan Lethem, Bruce Schneier, and Michael Chabon, argued that although Google
Books was an “extremely exciting” project, it failed in its current form to
protect the privacy of readers, thus creating a “real risk of disclosure” of
sensitive information to “prying governmental entities and private litigants,”
potentially giving rise to a “chilling effect,” hurting not only readers but
also authors and publishers, not least those writing about sensitive or
controversial topics.32 Library associations also raised a set of
concerns, such as the cost of library subscriptions and privacy.33 And most
predictably, companies such as Amazon and Microsoft, who also had a stake in
mass digitization, opposed the settlement; Microsoft even funded some nuanced
research efforts into its implications.34 Finally, and most damningly, the
Department of Justice decided to get involved with an antitrust argument.

By this point, opposition to the Google Books project, as it was outlined in
the proposed settlement, wasn’t only motivated by commercial concerns; it was
now also motivated by a public that framed Google’s mass digitization project
as a parasitical threat to the public sphere itself. The framing of Google as
a potential menace was a jarring image that stood in stark contrast to Larry
Page’s and Sergey Brin’s philanthropic attitudes and to Google’s famous “Don’t
be evil” slogan. The public reaction thus signaled a change in Google’s
reputation as the company metamorphosed in the public eye from a small
underdog company to a multinational corporation with a near-monopoly in the
search industry. Google’s initially inspiring approach to information as a
realm of plenitude now appeared in the public view more similar to the actions
of megalomaniac land-grabbers.

Google, however, while maintaining its universalizing mission regarding
information, also countered the accusations of monopoly building, arguing that
potential competitors could just step up, since nothing in the agreements
entered into by the libraries and Google “precludes any other company or
organization from pursuing their own similar effort.”35 Nevertheless, Judge
Denny Chin rejected the settlement in March 2011 with the following statement:
“The question presented is whether the ASA is fair, adequate, and reasonable.
I conclude that it is not.”36 Google left the proposed settlement behind and
pressed on with its initial case, backed by new amicus briefs focusing on its
argument that book scanning was fair use. They argued that they were not
demanding exclusivity on the information they scanned, that they didn’t
prohibit other actors from digitizing the works they were digitizing, and that
their main goal was to enrich the public sphere with more information, not to
build an information monopoly. In November 2013 Judge Denny Chin issued a new
opinion confirming that Google Books was indeed fair use.37 Chin’s opinion was
later consolidated in a major victory for Google in 2015 when Judge Pierre
Leval of the Second Circuit Court of Appeals upheld the legality of Google Books with the words
“Google’s unauthorized digitizing of copyright-protected works, creation of a
search functionality, and display of snippets from those works are non-
infringing fair uses.”38 Leval’s decision marked a new direction, not only for
Google Books, but also for mass digitization in general, as it signaled a
shift in cultural expectations about what it means to experience and
disseminate cultural artifacts.

Once again, the story of Google Books took a new turn. What was first
presented as a gift to cultural memory institutions and the public, and later
as theft from and threat to these same entities, on closer inspection revealed
itself as a much more complex circulatory system of expectations, promises,
risks, and blame. Google Books thus instigated a dynamic and forceful
connection between Google and cultural memory institutions, where the roles of
giver and receiver, and the first giver and second giver/returner, were
difficult to decode. Indeed, the binding nature of the relationship between
Google Books and cultural memory institutions proved to be much more complex
than the simple physical exchange of books and digital files. As the next
section outlines, this complex system of cultural production was held together
by contractual arrangement—central joints, as it were, connecting data and
works, public and private, local and global, in increasingly complex ways. For
Google Books, these contractual relations appear as the connective tissues
that make these assemblages possible, and which are therefore fundamental to
their affective dimensions.

## The Infrapolitics of Contract

In common parlance a contract is a legal tool that formalizes a “mutual
agreement between two or more parties that something shall be done or forborne
by one or both,” often enforceable by law.39 Contractual systems emerged with
the medieval merchant regime, and later evolved with classical liberalism into
an ideological revolt against paternalist systems as nothing less than
freedom, a legal construct that could destroy the sentimental bonds of
personal dependence.40 As the classic liberal social scientist William Graham
Sumner argued, “[c]ontract … is rational … realistic, cold, and matter-of-
fact.” The rational nature of contracts also affected their temporality, since
a contract endures only “so long as the reason for it endures,” and their
spatiality, relegating any form of sentiment from the public sphere to “the
sphere of private and personal relations.”41

Sentiments prevailed, however, as the contracts tying together Google and
cultural memory institutions emerged. Indeed, public and professional
evaluations of the agreements often took an affective, even sexualized, form.
The economist Paul Courant situated libraries “in bed with Google”42; library
consultant Jeff Ubois and media expert Peter B. Kaufman recounted _how_
they got in bed with Google—“[w]e were approached singly, charmed in
confidence, the stranger was beguiling, and we embraced” 43; communication
scholar Evelyn Bottando announced that “libraries not only got in bed with
Google. They got married”44; and librarian Jessamyn West finally pondered the
ruins of the relationship: “[s]till not sure, after all that, how we got this all
so wrong. Didn’t we both want the same thing? Maybe it really wasn’t us, it
was them. Most days it’s hard to remember what we saw in Google. Why did we
think we’d make good partners?”45

The evaluative discourse around Google Books dispels the idea of contracts as
dispassionate transactions for services and labor, showing rather that
contracts are infrapolitical apparatuses that give rise to emotions and
affect; and that, moreover, they are systems of doctrines, relations, and
social artifacts that organize around specific ideologies, temporalities,
materialities, and techniques.46 First and foremost, contracts give rise to
new kinds of infrastructures in the field of cultural memory: they mediate,
connect, and converge cultural memory institutions globally, giving rise to
new institutional networks, in some cases increasing globalization and
mobility for both users and objects, and in other cases restricting the same.
The Google Books contracts display both technical and symbolic aspects: as
technical artifacts they establish intricate frameworks of procedures,
commitments, rights, and incentives for governing the transactions of cultural
memory artifacts and their digitized copies. As symbolic artifacts they evoke
normative principles, expressing different measures of good will toward
libraries, but also—as all contracts do—introduce the possibility of distrust,
conflict and betrayal.47

Despite their centrality to mass digitization assemblages, and although some
of them have been made available to the public,48 the content of these
particular contracts still suffers from the epistemic gap incurred in practical
and symbolic form by Google’s Agreements and Non-Disclosure Agreements (NDAs),
a kind of agreement most libraries are required to sign when entering the
partnership. Like all contracts, the individual contracts signed by the
partnership libraries vary in nature and have different implications. While
many of Google’s agreements may be publicly available, they have often only
been made public through requests and transparency mechanisms such as the
Freedom of Information Act. As the Open Rights Alliance notes in their
publication of the agreement entered between the British Library and Google,
“We asked the British Library for a copy of the agreement with Google, which
was not uploaded to their transparency website with other similar contracts,
as it didn’t involve monetary exchange. This may be a loophole transparency
activists want to look at. After some toing and froing with the Freedom of
Information Act we got a copy.”49

While the culture of contractual secrecy is native to the business world, with
its safeguarding of business processes, and is easily navigated by business
partners, it is often opposed to the ethos of state-subsidized cultural
institutions who “draw their financial and moral support from a public that
expects transparency in their activities, ranging from their materials
acquisitions to their business deals.”50 For these reasons, library
organizations have recommended that nondisclosure agreements should be avoided
if possible, and minimized if they are necessary.51 Google, in response, noted
on its website that: “[t]hough not all of the library contracts have been made
public, we can say that all of them are non-exclusive, meaning that all of our
library partners are free to continue their own scanning projects or work with
others while they work with Google to digitize their books.”52

Regardless of their contractual content and later publication, the contracts
are a vital instrument in Google’s broader management of visibility. As Mikkel
Flyverbom, Clare Birchall, and others have argued, this practice of visibility
management—which they define as “the many ways in which organizations seek to
curate and control their presence, relations, and comprehension vis-à-vis
their surroundings” through practices of transparency, secrecy, opacity,
surveillance, and disclosure—is in the digital age a complex issue closely
tied to the question of governance and power. While each publication act may
serve to create an uncomplicated picture of transparency, it nevertheless
happens in a paradoxical global regulatory environment that on the one hand
encourages “sunshine” laws that demand that governments, corporations, and
civil-sector organizations provide access to information, yet on the other
hand also harbors regulatory agencies that seek mechanisms and rules by which
to keep information hidden. Thus, as Flyverbom et al. conclude, the “everyday
practices of organizing invariably implicate visibility management,” whose
valences are “attached to transparency and opacity” that are not simple and
straightforward, but rather remain “dependent upon the actor, the context, and
the purpose of organizations and individuals.”53

Steven Levy recounts how Google began its scanning operations in “near-total
stealth,” a “cloak-and-dagger” approach that stood in contrast to Google’s
public promotion of transparency as a new mode of existence. As Levy argues,
“[t]he secrecy was yet another expression of the paradox of a company that
sometimes embraced transparency and other times seemed to model itself on the
NSA.”54 Yet, while secrecy practices may have suited some of Google’s
operations, they sit much more uneasily with their book scanning programs: “If
Google had a more efficient way to scan books, sharing the improved techniques
could benefit the company in the long run—inevitably, much of the output would
find its way onto the web, bolstering Google’s indexes. But in this case,
paranoia and a focus on short-term gain kept the machines under wraps.”55 The
nondisclosure agreements show that while boundaries may be blurred between
Google Books and libraries, we may still identify different regulatory models
and modes of existence within their networks, including the explicit _library
ethos_ (in the Weberian sense of the term) of public access, not only to the
front end but also to some areas of the back end, and the business world’s
secrecy practices. 56

Entering into a mass digitization public-private partnership (PPP) with a
corporation such as Google is thus not only a logical and pragmatic next step
for cultural memory institutions, it is also a political step. As already
noted, Google Books, through its embedding in Google, injects cultural memory
objects into new economic and cultural infrastructures. These infrastructures
are governed less by the hierarchical world of curators, historians, and
politicians, and more by feedback networks of tech companies, users, and
algorithms. Moreover, they forge ever closer connections to data-driven market
logics, where computational rather than representational power counts. Mass
digitization PPPs such as Google Books are thus also symptoms of a much more
pervasive infrapolitical situation, in which cultural memory institutions are
increasingly forced to alter their identities from public caretakers of
cultural heritage to economic actors in the EU internal market, controlled by
the framework of competition law, time-limited contracts, and rules on state
aid.57 Moreover, mastering the rules of these new infrastructures is not
necessarily an easy feat for public institutions.58 Thus, while Google claims
to hold a core commitment regarding free digital access to information, and
while its financial apparatus could be construed as making Google an eligible
partner in accordance with the EU’s policy objectives toward furthering
public-private partnerships in Europe,59 it is nevertheless, as legal scholar
Maurizio Borghi notes, relevant to take into account Google’s previous
monopoly-building history.60

## The Politics of Google Books

A final aspect of Google Books relates to the universal aspiration of Google
Books’s collection, its infrapolitics, and what it empirically produces in
territorial terms. As this chapter’s previous sections have outlined, it was
an aspiration of Google Books to transcend the cultural and political
limitations of physical cultural memory collections by gathering the written
material of cultural memory institutions into one massive digitized
collection. Yet, while the collection spans millions of works in hundreds of
languages from hundreds of countries,61 it is also clear that even large-scale
mass digitization processes still entail procedures of selection on multiple
levels from libraries to works. These decisions produce a political reality
that in some respects reproduces and accentuates the existing politics of
cultural memory institutions in terms of territorial and class-based
representations, and in other respects gives rise to new forms of cultural
memory politics that part ways with the political regimes of traditional
curatorial apparatuses.

One obvious area in which to examine the politics produced by the Google Books
assemblage is in the selection of libraries that Google chooses to partner
with.62 While the full list of Google Books partners is not disclosed on
Google’s own webpage, it is clear from the available list that, up to now,
Google Books has mainly partnered with “great libraries,” such as elite
university libraries and national libraries. The rationale for choosing these
libraries has no doubt been to partner up with cultural memory institutions
that preside over as much material as possible, and which are therefore able
to provide more pieces of the puzzle than, say, a small-town public library
that presides over only a fraction of such material. Yet, while these
libraries provide Google Books with an impressive and extensive collection of
rare and valuable artifacts that give the impression of a near-universal
collection, they nevertheless also contain epistemological and historical
gaps. Historian and digital humanist Andrew Prescott notes, for example, the
limited collections of literature written by workers and other lower-class
people in the early eighteenth century in elite libraries. This institutional
lack creates a pre-filtered collection in Google Books, favoring “[t]hose
writers of working class origins who had a success story to report, who had
become distinguished statesmen, successful businessmen, religious leaders and
so on,” that is, the people who were “able to find commercial publishers who
were interested in their story.”63 Google’s decision to partner with elite
libraries thus inadvertently reproduces the class-based biases of analog
cultural memory institutions.

In addition to the reproduction of analog class-based bias in its digital
collection, the Google Books corpus also displays a genre bias, veering
heavily toward scientific publications. As mathematicians Eitan Pechenik et
al. show, the contents of the Google Books corpus over the course of the 1900s
are “increasingly dominated by scientific publications rather than popular
works,” and “even the first data set specifically labeled as fiction appears
to be saturated with medical literature.”64 The fact that Google Books is
constellated in such a manner thus challenges a “vast majority of existing
claims drawn from the Google Books corpus,” just as it points to the need “to
fully characterize the dynamics of the corpus before using these data sets to
draw broad conclusions about cultural and linguistic evolution.”65

Last but not least, Google Books’s collection still bespeaks its beginnings:
it still primarily covers Anglophone ground. There is hardly any literature
that reviews the geographic scope in Google Books, but existing work does
suggest that Google is still heavily oriented toward US-based libraries.66
This orientation does not necessarily give rise to an Anglophone linguistic
hegemony, as some have feared, since many of the Anglophone libraries hold
considerable collections of foreign language books. But it does invariably
limit its collections to the works in foreign languages that the elite
libraries deemed worthy of preserving. The gaps and biases of Google Books
reveal it to be less of a universal and monolithic collection, and more of an
impressive, but also specific and contingent, assemblage of works, texts, and
relations that is determined by the relations Google Books has entered into in
terms of class, discipline, and geographical scope.

Google Books is not only the result of selection processes on the level of
partnering institutions, but also on the level of organizational
infrastructure. While the infrastructures of Google Books in fact depart from
those of its parent company in many regards, in order to avoid copyright infringement
charges, there is little doubt, however, that people working actively on
Google’s digitization activities (included here are both users and Google
employees) are also globally distributed in networked constellations. The
central organization for cultural digitization, the Google Cultural Institute,
is located in Paris, France. Yet the people affiliated with this hub are
working across several countries. Moreover, people working on various aspects
of Google Books, from marketing to language technology, to software
developments and manual scanning processes, are dispersed across the globe.
And it is perhaps in this way that we tend to think of Google in general—as a
networked global company—and for good reasons. Google has been operating
internationally for almost as long as it has been around. It has offices in
countries all over the globe, and works in numerous languages. Today it is one
of the most important global information institutions, and as more and more
people turn to Google for its services, Google also increasingly reflects
them—indeed they enter into a complex cognitive feedback system.
Google depends on the growing diversity of its “inhabitants” and on its
financial and cultural leverage on a global scale, and to this effect it is
continuously fine-tuning its glocalization strategies, blending the universal
and the particular. This glocal strategy does not necessarily create a
universal company, however; it would be more correct to say that Google’s
glocality brings the globe to Google, redefining it as an “American”
company.67 Hence, while there is little doubt that Google, and in effect
Google Books, increasingly tailors its services to specific consumers,68 and that this
tailoring allows for a more complex global representation generated by
feedback systems, Google’s core nevertheless remains lodged on American soil.
This is underlined by the fact that Google Books still effectively belongs to
US jurisdiction.69 Google Books is thus on the one hand a globalized company
in terms of both content and institutional framework; yet it also remains an
_American_ multinational corporation, constrained by US regulation and social
standards, and ultimately reinforcing the capacities of the American state.
While Google Books operates as a networked glocal project with universal
aspirations, then, it also remains fenced in by its legal and cultural
apparatuses.

In sum, just as a country’s regulatory and political apparatus affects the
politics of its cultural memory institutions in the analog world, so is the
politics of Google Books co-determined by the operations of Google. Thus,
curatorial choices are made not only on the basis of content, but also of the
location of server parks, existing company units, lobbying efforts, public
policy concerns, and so on. And the institutional identity of Google Books is
profoundly late-sovereign in this regard: on one hand it thrives on and
operates with horizontal network formations; on the other, it still takes into
account and has to operate with, and around, sovereign epistemologies and
political apparatuses. These vertical and horizontal lines ultimately rewire
the politics of cultural memory, shifting the stakes from sovereign
territorial possessions to more functional, complex, and effective means of
control.

## Notes

1. Chartier 2004. 2. As philosopher Jacques Derrida noted anecdotally on his
colleagues’ way of reading, “some of my American colleagues come along to
seminars or to lecture theaters with their little laptops. They don’t print
out; they read out directly, in public, from the screen. I saw it being done
as well at the Pompidou Center [in Paris] a few days ago. A friend was giving
a talk there on American photography. He had this little Macintosh laptop
there where he could see it, like a prompter: he pressed a button to scroll
down his text. This assumed a high degree of confidence in this strange
whisperer. I’m not yet at that point, but it does happen.” (Derrida 2005, 27).
3. As Ken Auletta recounts, Eric Schmidt remembers when Page surprised him in
the early 2000s by showing off a book scanner he had built which was inspired
by the great library of Alexandria, claiming that “We’re going to scan all the
books in the world,” and explaining that for search to be truly comprehensive
“it must include every book ever published.” Page literally wanted Google to
be a “super librarian” (Auletta 2009, 96). 4. Constraints of a physical
character (how to digitize and organize all this knowledge in physical form);
legal character (how to do it in a way that suspends existing regulation); and
political character (how to transgress territorial systems). 5. Take, for
instance, project Bibliotheca Universalis, comprising American, Japanese,
German, and British libraries among others, whose professed aim was “to
exploit existing digitization programs in order to … make the major works of
the world’s scientific and cultural heritage accessible to a vast public via
multimedia technologies, thus fostering … exchange of knowledge and dialogue
over national and international borders.” It was a joint project of the French
Ministry of Culture, the National Library of France, the Japanese National
Diet Library, the Library of Congress, the National Library of Canada,
Discoteca di Stato, Deutsche Bibliothek, and the British Library:
. The project took its name
from the groundbreaking sixteenth-century publication _Bibliotheca Universalis_
(1545–1549), a four-volume alphabetical bibliography that listed all the known
books printed in Latin, Greek, or Hebrew. Obviously, the dream of the total
archive is not limited to the realm of cultural memory institutions, but has a
much longer and more generalized lineage; for a contemporary exploration of
these dreams see, for instance, issue six of _Limn Magazine_ , March 2016,
. 6. As the project noted in its research summary,
“One of these barriers is the heterogeneity of information and services.
Another impediment is the lack of powerful filtering mechanisms that let users
find truly valuable information. The continuous access to information is
restricted by the unavailability of library interfaces and tools that
effectively operate on portable devices. A fourth barrier is the lack of a
solid economic infrastructure that encourages providers to make information
available, and give users privacy guarantees”; Summary of the Stanford Digital
Library Technologies Project,
. 7. Brin and Page
1998. 8. Levy 2011, 347. 9. Levy 2011, 349. 10. Levy 2011, 349. 11. Young
1988. 12. They had a hard time, however, creating a new PageRank-like
algorithm for books; see Levy 2011, 349. 13. Google Inc., “Detection of
Grooves in Scanned Images,” March 24, 2009,
[https://www.google.ch/patents/US7508978?dq=Detection+Of+Grooves+In+Scanned+Images&hl=da&sa=X&ved=0ahUKEwjWqJbV3arMAhXRJSwKHVhBD0sQ6AEIHDAA](https://www.google.ch/patents/US7508978?dq=Detection+Of+Grooves+In+Scanned+Images&hl=da&sa=X&ved=0ahUKEwjWqJbV3arMAhXRJSwKHVhBD0sQ6AEIHDAA).
14. See, for example, Jeffrey Toobin, “Google’s Moon Shot,” _New Yorker_ ,
February 4, 2007, shot>. 15. The ghostly traces scanners left in digitized books are still in
evidence today, collected by a curious little blog devoted to the artful
mistakes of scanners, _The Art of Google Books_ , .
For a more thorough and general introduction to the historical relationship
between humans and machines in labor processes, see Kang 2011. 16. The
abstract from the patent reads as follows: “Systems and methods for pacing and
error monitoring of a manual page turning operator of a system for capturing
images of a bound document are disclosed. The system includes a speaker for
playing music having a tempo and a controller for controlling the tempo based
on an imaging rate and/or an error rate. The operator is influenced by the
music tempo to capture images at a given rate. Alternative or in addition to
audio, error detection may be implemented using OCR to determine page numbers
to track page sequence and/or a sensor to detect errors such as object
intrusion in the image frame and insufficient light. The operator may be
alerted of an error with audio signals and signaled to turn back a certain
number of pages to be recaptured. When music is played, the tempo can be
adjusted in response to the error rate to reduce operator errors and increase
overall throughput of the image capturing system. The tempo may be limited to
a maximum tempo based on the maximum image capture rate.” See Google Inc.,
“Pacing and Error Monitoring of Manual Page Turning Operator,” November 17,
2009, . 17. Google, “linear-book-
scanner,” _Google Code Archive_ , August 22, 2012,
. 18. The libraries of
Harvard, the University of Michigan, Oxford, Stanford, and the New York Public
Library. 19. Levy 2011, 351. 20.  _The Authors Guild et al. vs. Google, Inc._
, Class Action Complaint 05 CV 8136, United States District Court, Southern
District of New York, September 20, 2005,
/settlement-resources.attachment/authors-
guild-v-google/Authors%20Guild%20v%20Google%2009202005.pdf>. 21. As the
Authors Guild notes, “The problem is that before Google created Book Search,
it digitized and made many digital copies of millions of copyrighted books,
which the company never paid for. It never even bought a single book. That, in
itself, was an act of theft. If you did it with a single book, you’d be
infringing.” Authors Guild v. Google: Questions and Answers,
. 22.
Peters 2015, 21. 23. Hayles 2005. 24. Purdon 2016, 4. 25. Fair use constitutes
an exception to the exclusive right of the copyright holder under the United
States Copyright Act; if the use of a copyright work is a “fair use,” no
permission is required. For a court to determine if a use of a copyright work
is fair use, four factors must be considered: (1) the purpose and character of
the use, including whether such use is of a commercial nature or is for
nonprofit educational purposes; (2) the nature of the copyrighted work; (3)
the amount and substantiality of the portion used in relation to the
copyrighted work as a whole; and (4) the effect of the use upon the potential
market for or value of the copyrighted work. 26. “Do you really want … the
whole world not to have access to human knowledge as contained in books,
because you really want opt out rather than opt in?” as quoted in Levy 2011,
360. 27. “It is an astonishing opportunity to revive our cultural past, and
make it accessible. Sure, Google will profit from it. Good for them. But if
the law requires Google (or anyone else) to ask permission before they make
knowledge available like this, then Google Print can’t exist” (Farhad Manjoo,
“Indexing the Planet: Throwing Google at the Book,” _Spiegel Online
International_ , November 9, 2005, /indexing-the-planet-throwing-google-at-the-book-a-383978.html>.) Technology
lawyer Jonathan Band also expressed his support: Jonathan Band, “The Google
Print Library Project: A Copyright Analysis,” _Journal of Internet Banking and
Commerce_ , December 2005, google-print-library-project-a-copyright-analysis.php?aid=38606>. 28.
According to Patricia Schroeder, the Association of American Publishers (AAP)
President, Google’s opt-out procedure “shifts the responsibility for
preventing infringement to the copyright owner rather than the user, turning
every principle of copyright law on its ear.” BBC News, “Google Pauses Online
Books Plan,” _BBC News_ , August 12, 2005,
. 29. Professor of law
Pamela Samuelson has conducted numerous progressive and detailed academic and
popular analyses of the legal implications of the copyright discussions; see,
for instance, Pamela Samuelson, “Why Is the Antitrust Division Investigating
the Google Book Search Settlement?,” _Huffington Post_ , September 19, 2009,
divi_b_258997.html>; Samuelson 2010; Samuelson 2011; Samuelson 2014. 30. Levy
2011, 362; Lessig 2010; Brewster Kahle, “How Google Threatens Books,”
_Washington Post_ , May 19, 2009, dyn/content/article/2009/05/18/AR2009051802637.html>. 31. EFF, “Google Book
Search Settlement and Reader Privacy,” Electronic Frontier Foundation, n.d.,
. 32.  _The Authors Guild et
al. vs. Google Inc_., 05 Civ. 8136-DC, United States Southern District of New
York, March 22, 2011,
[http://www.nysd.uscourts.gov/cases/show.php?db=special&id=115](http://www.nysd.uscourts.gov/cases/show.php?db=special&id=115).
33. Brief of Amicus Curiae, American Library Association et al. in relation to
_The Authors Guild et al. vs. Google Inc_., 05 Civ. 8136-DC, filed on August 1
2012,
.
34. Steven Levy, “Who’s Messing with the Google Books Settlement? Hint:
They’re in Redmond, Washington,” _Wired_ , March 3, 2009,
. 35. Sergey Brin, “A Library
to Last Forever,” _New York Times_ , October 8, 2009,
. 36.  _The Authors
Guild et al. vs. Google Inc_., 05 Civ. 8136-DC, United States Southern
District of New York, March 22, 2011,
[http://www.nysd.uscourts.gov/cases/show.php?db=special&id=115](http://www.nysd.uscourts.gov/cases/show.php?db=special&id=115).
37. “Google does, of course, benefit commercially in the sense that users are
drawn to the Google websites by the ability to search Google Books. While this
is a consideration to be acknowledged in weighing all the factors, even
assuming Google’s principal motivation is profit, the fact is that Google
Books serves several important educational purposes. Accordingly, I conclude
that the first factor strongly favors a finding of fair use.” _The Authors
Guild et al. vs. Google Inc_., 05 Civ. 8136-DC, United States Southern
District of New York, November 14, 2013,
[http://www.nysd.uscourts.gov/cases/show.php?db=special&id=355](http://www.nysd.uscourts.gov/cases/show.php?db=special&id=355).
38.  _Authors Guild v. Google, Inc_., 13–4829-cv, December 16, 2015,
81c0-23db25f3b301/1/doc/13-4829_opn.pdf>. In the aftermath of Pierre Leval’s
decision, the Authors Guild yet again filed a petition for the Supreme Court to reverse the appeals court decision, and publicly reiterated the framing of Google as a parasite rather than a benefactor. A
brief supporting the Guild’s petition and signed by a diverse group of authors
such as Malcolm Gladwell, Margaret Atwood, J. M. Coetzee, Ursula Le Guin, and
Yann Martel noted that the legal framework used to assess Google knew nothing
about “the digital reproduction of copyrighted works and their communication
on the Internet or the phenomenon of ‘mass digitization’ of vast collections
of copyrighted works”; nor, they argued, was the fair-use doctrine ever
intended “to permit a wealthy for-profit entity to digitize millions of works
and to cut off authors’ licensing of their reproduction, distribution, and
public display rights.” Amicus Curiae filed on behalf of Author’s Guild
Petition, No. 15–849, February 1, 2016, content/uploads/2016/02/15-849-tsac-TAA-et-al.pdf>. 39. Oxford English
Dictionary,
[http://www.oed.com/view/Entry/40328?rskey=bCMOh6&result=1&isAdvanced=false#eid8462140](http://www.oed.com/view/Entry/40328?rskey=bCMOh6&result=1&isAdvanced=false#eid8462140).
40. The contract as we know it today developed within the paradigm of Lex
Mercatoria; see Teubner 1997. The contract is therefore a device of global
reach that has developed “mainly outside the political structures of nation-
states and international organisations for exchanges primarily in a market
economy” (Snyder 2002, 8). In the contract theory of John Locke, the
signification of contracts developed from a mere trade tool to a distinction
between the free man and the slave. Here, the societal benefits of contracts
were presented as a matter of time, where the bounded delineation of work was
characterized as contractual freedom; see Locke 2003 and Stanley 1998. 41.
Sumner 1952, 23. 42. Paul Courant, “On Being in Bed with Google,” _Au Courant_
, November 4, 2007, google>. 43. Kaufman and Ubois 2007. 44. Bottando 2012. 45. Jessamyn West,
“Google’s Slow Fade With Librarians: Maybe They’re Just Not That Into Us,”
_Medium_ , February 2, 2015, with-librarians-fddda838a0b7>. 46. Suchman 2003. The lack of research into
contracts and emotions is noted by Hillary M. Berk in her fascinating research
on contracts in the field of surrogacy: “Despite a rich literature in law and
society embracing contracts as exchange relations, empirical work has yet to
address their emotional dimensions” (Berk 2015). 47. Suchman 2003, 100. 48.
See a selection on the Public Index:
, and The Internet Archive:
. You may also find
contracts here: the University of Michigan ( /michigan-digitization-project>), the University of California
(), the Committee on
Institutional Cooperation ( google-agreement>), and the British Library
( google-books-and-the-british-library>), to name but a few. 49. Javier Ruiz,
“Is the Deal between Google and the British Library Good for the Public?,”
Open Rights Group, August 24, 2011, /access-to-the-agreement-between-google-books-and-the-british-library>. 50.
Kaufman and Ubois 2007. 51. Association of Research Libraries, “ARL Encourages
Members to Refrain from Signing Nondisclosure or Confidentiality Clauses,”
_ARL News_ , June 5, 2009, encourages-members-to-refrain-from-signing-nondisclosure-or-confidentiality-
clauses#.Vriv-McZdE4>. 52. Google, “About the Library Project,” _Google Books
Help,_ n.d.,
[https://support.google.com/books/partner/faq/3396243?hl=en&rd=1](https://support.google.com/books/partner/faq/3396243?hl=en&rd=1).
53. Flyverbom, Leonardi, Stohl, and Stohl 2016. 54. Levy 2011, 354. 55. Levy
2011, 352. 56. To be sure, however, the practice of secrecy is no stranger to
libraries. Consider only the closed stack that the public is never given
access to; the bureaucratic routines that are kept from the public eye; and
the historic relation between libraries and secrecy so beautifully explored by
Umberto Eco in several of his works. Yet, the motivations for nondisclosure
agreements on the one hand and public sector secrets on the other differ
significantly, the former lodged in a commercial logic and the latter in an
idea, however abstract, about “the public good.” 57. Belder 2015. For insight
into the societal impact of contractual regimes on civil rights regimes, see
Somers 2008. For insight into relations between neoliberalism and contracts,
see Mitropoulos 2012. 58. As engineer and historian Henry Petroski notes, for
a PPP contract to be successful it must be written “properly,” but “the
public partners are not often very well versed in these kinds of contracts and
they don’t know how to protect themselves.” See Buckholtz 2016. 59. As argued
by Lucky Belder in “Cultural Heritage Institutions as Entrepreneurs,” 2015.
60. Borghi 2013, 92–115. 61. Stephan Heyman, “Google Books: A Complex and
Controversial Experiment,” _New York Times_ , October 28, 2015,
and-controversial-experiment.html>. 62. Google, “Library Partners,” _Google
Books_ , . 63. Andrew
Prescott, “How the Web Can Make Books Vanish,” _Digital Riffs_ , August 2013,
.
64. Pechenick, Danforth, Dodds, and Barrat 2015. 65. What Pechenick et al. refer to here are of course the claims of Erez Aiden and Jean-Baptiste Michel
among others, who promote “culturomics,” that is, the use of huge amounts of
digital information—in this case the corpus of Google Books—to track changes
in language, culture, and history. See Aiden and Michel 2013; and Michel et
al. 2011. 66. Neubert 2008; and Weiss and James 2012, 1–3. 67. I am indebted
to Gayatri Spivak here, who makes this argument about New York in the context
of globalization; see Spivak 2000. 68. In this respect Google mirrors the
glocalization strategies of media companies in general; see Thussu 2007, 19.
69. Although foreign legislation of course also affects the workings of Google, as is clear from the growing body of European regulatory casework on Google, such as the right to be forgotten, competition law, tax, etc.

# 3
Sovereign Soul Searching: The Politics of Europeana

## Introduction

In 2008, the European Commission launched the European mass digitization
project, Europeana, to great fanfare. Although the EC’s official
communications framed the project as a logical outcome of years of work on
converging European digital library infrastructures, the project was received
in the press as a European counterresponse to Google Books.1 The popular media
framings of Europeana were focused in particular on two narratives: that
Europeana was a public response to Google’s privatization of cultural memory,
and that Europeana was a territorial response to American colonization of
European information and culture. This chapter suggests that while both of
these sentiments were present in Europeana’s early years, the politics of what
Europeana was—and is—paints a more complicated picture. A closer glance at
Europeana’s social, economic, and legal infrastructures thus shows that the
European mass digitization project is neither an attempt to replicate Google’s
glocal model, nor is it a continuation of traditional European cultural
policies. Rather, Europeana produces a new form of cultural memory politics
that converges national and supranational imaginaries with global information
infrastructures.

If global information infrastructures and national politics today seemingly go
hand in hand in Europeana, it wasn’t always so. In fact, in the 1990s,
networked technologies and national imaginaries appeared to be mutually
exclusive modes of existence. The fall of the Berlin Wall in 1989 nourished a
new antisovereign sentiment, which gave rise to recurring claims in the 1990s
that the age of sovereignty had passed into an age of post-sovereignty. These
claims were fueled by a globalized set of economic, political, and
technological forces, not least of which was the seemingly ungovernable nature of
the Internet—which appeared to unbuckle the nation-state’s control and voice
in the process of globalization and gave rise to a sense of plausible anarchy,
which in turn made John Perry Barlow’s (in)famous “Declaration of the Independence of Cyberspace” appear not as pure utopian fabulation, but rather
as a prescient diagnosis.2 Yet, while it seemed in the early 2000s that the
Internet and the cultural and economic forces of globalization had made the
notion and practice of the nation-state redundant on both practical and
cultural levels, the specter of the nation nevertheless seemed to linger.
Indeed, the nation-state remained a fixed point in political and
cultural discourses. In fact, it not only lingered as a specter, but borders
were also beginning to reappear as regulatory forces. The borderless world
was, as Tim Wu and Jack Goldsmith noted in 2006, an illusion;3 geography had
revenged itself, not least in the digital environment.4

Today, no one doubts the cultural-political import of the national imaginary.
The national imaginary has fueled antirefugee movements, the surge of
nationalist parties, the EU’s intensified crisis, and the election of Donald
Trump, to name just a few critical political events in the 2010s. Yet, while
the nationalist imaginary is becoming ever stronger, paradoxically its
communicative infrastructures are simultaneously becoming ever more
globalized. Thus, globally networked digital infrastructures are quickly
supplementing, and in many cases even supplanting, those national
communicative infrastructures that were instrumental in establishing a
national imagined community in the first place—infrastructures such as novels
and newspapers.5 The convergence of territorially bounded imaginaries and
global networks creates new cultural-political constellations of cultural
memory where the centripetal forces of nationalism operate alongside,
sometimes with and sometimes against, the centrifugal forces of digital
infrastructures. Europeana is a preeminent example of these complex
infrastructural and imaginary dynamics.

## A European Response

When Google announced their digitization program at the Frankfurt Book Fair in
2004, it instantly created ripples in the European cultural-political
landscape, in France in particular. Upon hearing the news about Google’s
plans, Jacques Chirac, president of France at the time, promptly urged the
then-culture minister, Renaud Donnedieu de Vabres, and Jean-Noël Jeanneney,
head of France’s Bibliothèque nationale, to commence a similar digitization
project and to persuade other European countries to join them.6 The seeds for
Europeana were sown by France, “the deepest, most sedimented reservoir of
anti-American arguments,”7 as an explicitly political reaction to Google
Books.

Europeana was thus from its inception laced with the ambiguous political
relationship between two historically competing universalist-exceptionalist
nations: the United States and France.8 It is a relationship that France sometimes pictures as a question of Americanization, and at other times extends into an image of a more diffuse Anglo-Saxon constellation. Highlighting the effects
Google Books would have on French culture, Jeanneney argued that Google’s mass
digitization efforts would pose several possible dangers to French cultural
memory such as bias in the collecting and organizing practices of Google Books
and an Anglicization of the cultural memory regulatory system. Explaining why
Google Books should be seen not only as an American, but also as an Anglo-
Saxon project, Jeanneney noted that while Google Books “was obviously an
American project,” it was nevertheless also one “that reached out to the
British.” The alliance between the Bodleian Library at Oxford and Google Books
was thus not only a professional partnership in Jeanneney’s eyes, but also a
symbolic bond where “the familiar Anglo-Saxon solidarity” manifested once
again vis-à-vis France, only this time in the digital sphere. Jeanneney even
paraphrased Churchill’s comment to Charles de Gaulle, noting that Oxford’s
alliance with Google Books yet again evidenced how British institutions,
“without consulting anyone on the other side of the English Channel,” favored
US-UK alliances over UK-Continental alliances “in search of European
patriotism for the adventure under way.”9

How can we understand Jeanneney’s framing of Google Books as an Anglo-Saxon
project and the function of this framing in his plea for a nation-based
digitization program? As historian Emile Chabal suggests, the concept of the
Anglo-Saxon mentality is a preeminently French construct that has a clear and
rich rhetorical function to strengthen the French self-understanding vis-à-vis
a stereotypical “other.”10 While fuzzy in its conceptual infrastructure, the
French rhetoric of the Anglo-Saxon is nevertheless “instinctively understood
by the vast majority of the French population” to denote “not simply a
socioeconomic vision loosely inspired by market liberalism and
multiculturalism” but also (and sometimes primarily) “an image of
individualism, enterprise, and atomization.”11 All these dimensions were at
play in Jeanneney’s anti-Google Books rhetoric. Indeed, Jeanneney suggested,
Google’s mass digitization project was not only Anglo-Saxon in its collecting
practices and organizational principles, but also in its regulatory framework:
“We know how Anglo-Saxon law competes with Latin law in international
jurisdictions and in those of new nations. I don’t want to see Anglo-Saxon law
unduly favored by Google as a result of the hierarchy that will be
spontaneously established on its lists.”12

What did Jeanneney suggest as infrastructural protection against the network
power of the Anglo-Saxon mass digitization project? According to Jeanneney,
the answer lay in territorial digitization programs: rather than simply
accepting the colonizing forces of the Anglo-Saxon matrix, Jeanneney argued, a
national digitization effort was needed. Such a national digitization project
would be a “ _contre-attaque_ ” against Google Books that should protect three
dimensions of French cultural sovereignty: its language, the role of the state
in cultural policy, and the cultural/intellectual order of knowledge in the
cultural collections.13 Thus Jeanneney suggested that any Anglo-Saxon mass
digitization project should be countered and complemented by mass
digitization projects from other nations and cultures to ensure that cultural
works are embedded in meaningful cultural contexts and languages. While the
nation was the central base of mass digitization programs, Jeanneney noted,
such digitization programs necessarily needed to be embedded in a European, or
Continental, infrastructure. Thus, while Jeanneney’s rallying cry to protect
the French cultural memory was voiced from France, he gave it a European
signature, frequently addressing and including the rest of Europe as a natural
ally in his _contre-attaque_ against Google Books.14 Jeanneney’s extension of French concerns to a European level was characteristic of France, which had
historically displayed a leadership role in formulating and shaping the EU.15
The EU, Jeanneney argued, could provide a resilient supranational
infrastructure that would enable French diversity to exist within the EU while
also providing a protective shield against unhampered Anglo-Saxon
globalization.

Other French officials took on a less combative tone, insisting that the
French digitization project should be seen not merely as a reaction to Google
but rather in the context of existing French and European efforts to make
information available online. “I really stress that it’s not anti-American,”
stated one official at the Ministry of Culture and Communication. Rather than
framing the French national initiatives as a reaction to Google Books, the
official instead noted that the prime objective was to “make more material
relevant to European patrimony available,” noting also that the national
digitization efforts were neither unique nor exclusionary—not even to
Google.16 The disjunction between Jeanneney’s discursive claims to mass
digitization sovereignty and the anonymous bureaucrat’s pragmatic and
networked approach to mass digitization indicates the late-sovereign landscape
of mass digitization as it unfolded between identity politics and pragmatic
politics, between discursive claims to sovereignty and economic global
cooperation. And as the next section shows, the intertwinement of these
discursive, ideological, and economic infrastructures produced a memory
politics in Europeana that was neither sovereign nor post-sovereign, but
rather late-sovereign.

## The Infrastructural Reality of Late-Sovereignty

Politically speaking, Europeana was always more than just an empty
countergesture or emulating response to Google. Rather, as soon as the EU
adopted Europeana as a prestige project, Europeana became embedded in the
political project of Europeanization and began to produce a political logic of
its own. Latching on to (rather than countering) a sovereign logic, Europeana
strategically deployed the European imaginary as a symbolic demarcation of its
territory. But the means by which Europeana was constructed and distributed
its territorial imaginaries nevertheless took place by means of globalized
networked infrastructures. The circumscribed cultural imaginary of Europeana
was thus made interoperable with the networked logic of globalization. This
combination of a European imaginary and neoliberal infrastructure in Europeana
produced an uneasy balance between national and supranational infrastructural
imaginaries on the one hand and globalized infrastructures on the other.

If France saw Europeana primarily through the prism of sovereign competition,
the European Commission emphasized a different dispositive: economic
competition. In his 2005 response to Jacques Chirac, José Manuel Barroso
acknowledged that the digitization of European cultural heritage was an
important task not only for nation-states but also for the EU as a whole.
Instead of the defiant tone of Jeanneney and De Vabres, Barroso and the EU
institutions opted for a more neutral, pragmatic, and diplomatic mass
digitization discourse. Instead of focusing on Europeana as a lever to prop up
the cultural sovereignty of France, and by extension Europe, in the face of
Americanization, Barroso framed Europeana as an important economic element in
the construction of a knowledge economy.17

Europeana was thus still a competitive project, but it was now reframed as one
that would be much more easily aligned with, and integrated into, a global
market economy.18 One might see the difference in the French and the EU
responses as a question of infrastructural form and affordance. If French mass
digitization discourses were concerned with circumscribing the French cultural
heritage within the territory of the nation, the EC was in practice more
attuned to the networked aspects of the global economy and an accompanying
discourse of competition and potentiality. The infrastructural shift from
delineated sphere to globalized network changed the infrapolitics of cultural
memory from traditional nation-based issues such as identity politics
(including the formation of canons) to more globally aligned trade-related
themes such as copyright and public-private governance.

The shift from canon to copyright did not mean, however, that national
concerns dissipated. On the contrary, ministers from the European Union’s
member countries called for an investigation into the way Google Books handled
copyright in 2008.19 In reality, Google Books had very little to do with
Europe at that time, in the sense that Google Books was governed by US
copyright law. Yet the global reach of Google Books made it a European concern
nevertheless. Both German and French representatives emphasized the rift
between copyright legislation in the US and in EU member states. The German
government proposed that the EC examine whether Google Books conformed to
Europe’s copyright laws. In France, President Nicolas Sarkozy stated in more
flamboyant terms that he would not permit France to be “stripped of our
heritage to the benefit of a big company, no matter how friendly, big, or
American it is.”20 Both countries moreover submitted _amicus curiae_ briefs21
to judge Denny Chin (who was in charge of the ongoing Google Books settlement
lawsuit in the US22), in which they argued against the inclusion of foreign
authors in the lawsuit.23 They further brought separate suits against Google
Books for their scanning activities and sought to exercise diplomatic pressure
against the advancement of Google Books.24

On an EU level, however, the territorial concerns were sidestepped in favor of
another matrix of concern: the question of public-private governance. Thus,
despite pressure from some member states, the EC decided not to write a
similar “amicus brief” on behalf of the EU.25 Instead, EC Commissioners
McCreevy and Reding emphasized the need for more infrastructures connecting
the public and private sectors in the field of mass digitization.26 Such PPPs
could range from relatively conservative forms of cooperation (e.g., private
sponsoring, or payments from the private sector for links provided by
Europeana) to more far-reaching involvement, such as turning the management of
Europeana over to the private sector.27 In a similar vein, a report authored
by a high-level reflection group (Comité des Sages) set up by the European
Commission opened the door for public-private partnerships and also set a time
frame for commercial exploitation.28 It was even suggested that Google could
play a role in the construction of Europeana. These considerations thus
contrasted the French resistance against Google with previous statements made
by the EC, which were concerned with preserving the public sector in the
administration of Europeana.

Did the European Commission’s networked politics signal a post-sovereign
future for Europeana? This chapter suggests no: despite the EC’s strategies,
it would be wrong to label the infrapolitics of Europeana as post-sovereign.
Rather, Europeana draws up a _late-sovereign_ 29 mass digitization landscape,
where claims to national sovereignty exist alongside networked
infrastructures.30 Why not post-sovereign? Because, as legal scholar Neil
Walker noted in 2003,31 the logic of sovereignty never waned even in the face
of globalized capitalism and legal pluralism. Instead, it fused with these
more globalized infrastructures to produce a form of politics that displayed
considerable continuity with the old sovereign order, yet also had distinctive
features such as globalized trade networks and constitutional pluralisms. In
this new system, seemingly traditional claims to sovereignty are carried out
irrespective of political practices, showing that globally networked
infrastructures and sovereign imaginaries are not necessarily mutually
exclusive; rather, territory and nation remain powerful emotive
forces. Since Neil Walker’s theoretical corrective to theories on post-
sovereignty, the notion of late sovereignty seems to have only gained in
relevance as nationalist imaginaries increase in strength and power through
increasingly globalized networks.

As the following section shows, Europeana is a product of political processes
that are concerned with both the construction of bounded spheres and canons
_and_ networked infrastructures of connectivity, competition, and potentiality
operating beyond, below, and between national societal structures. Europeana’s
late-sovereign framework produces an infrapolitics in which the discursive
political juxtaposition between Europeana and Google Books exists alongside
increased cooperation between Google Books and Europeana, making it necessary
to qualify the comparative distinctions in mass digitization projects on a
much more detailed level than merely territorial delineations, without,
however, disposing of the notion of sovereignty. The simultaneous
contestations and connections between Europeana and Google Books thus make
visible the complex economic, intellectual, and technological infrastructures
at play in mass digitization.

What form did these infrastructures take? In a sense, the complex
infrastructural set-up of Europeana as it played out in the EU’s framework
ended up extending along two different axes: a vertical axis of national and
supranational sovereignty, where the tectonic territorial plates of nation-
states and continents move relative to each other by converging, diverging,
and transforming; and a horizontal axis of deterritorializing flows that
stream within, between, and throughout sovereign territories consisting both
of capital interests (in the form of transnational lobby organizations working
to protect, promote, and advance the interests of multinational companies or
nongovernmental organizations) and the affective relations of users.

## Harmonizing Europe: From Canon to Copyright

Even if the EU is less concerned with upholding the regulatory boundaries of
the nation-state in mass digitization, bordering effects are still found in
mass digitized collections—this time in the form of copyright regulation. As
in the case of Google Books, mass digitization also raised questions in Europe
about the future role of copyright in the digital sphere. On the one hand,
cultural industries were concerned about the implications of mass digitization
for their production and copyrights32; on the other hand, educational
institutions and digital industries were interested in “unlocking” the
cognitive and cultural potentials that resided within the copyrighted
collections in cultural heritage institutions. Indeed, copyright was such a
crucial concern that the EC repeatedly stated the necessity to reform and
harmonize European copyright regulation across borders.

Why is copyright a concern for Europeana? Alongside economic challenges, the
current copyright legislation is _the_ greatest obstacle to mass
digitization. Copyright effectively prohibits mass digitization of any kind of
material that is still within copyright, creating large gaps in digitized
collections that are often referred to as “the twentieth-century black hole.”
These black holes appear as a result of the way European “copyright interacts
with the digitization of cultural heritage collections” and manifest
themselves as “marked lack of online availability of twentieth-century
collections.”33 The lack of a common copyright mechanism not only hinders online availability, but also challenges European cross-border digitization projects, as well as the possibility of data-mining collections à la Google, because of the difficulty of ascertaining, and hence definitively flagging, the public domain status of an object.34
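
The flagging problem can be made concrete with a minimal sketch, assuming each record carries a machine-readable rights statement in a field here called `rights`. The record layout and field name are illustrative assumptions, not Europeana’s actual data model; only the Creative Commons public domain URIs are real rights statements.

```python
# Minimal sketch of rights gatekeeping for a digitized collection. The record
# layout and the "rights" field name are illustrative assumptions; the two
# Creative Commons public domain URIs are real.
PUBLIC_DOMAIN_MARKS = {
    "http://creativecommons.org/publicdomain/mark/1.0/",
    "http://creativecommons.org/publicdomain/zero/1.0/",
}

def usable_for_data_mining(record: dict) -> bool:
    """Return True only when the record carries an explicit public domain flag."""
    return record.get("rights") in PUBLIC_DOMAIN_MARKS

records = [
    {"id": "obj-1", "rights": "http://creativecommons.org/publicdomain/mark/1.0/"},
    {"id": "obj-2", "rights": None},  # rights holder unknown: part of the "black hole"
]

# Only explicitly flagged objects survive the filter; everything else requires
# a diligent search for the rights holder before it can be shown or mined.
minable = [r["id"] for r in records if usable_for_data_mining(r)]
print(minable)  # ['obj-1']
```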

While Europeana’s twentieth-century black hole poses a problem, Europe would
not, as one worker in the EC’s Directorate-General (DG) Copyright unit noted,
follow Google’s opt-out mass digitization strategy because “the European
solution is not the Google solution. We do a diligent search for the rights
holder before digitizing the material. We follow the law.”35 By positioning
herself on the right side of the law, the DG employee implicitly also
placed Google on the wrong side of the law. Yet, as another DG employee
explained with frustration, the right side of the law was looking increasingly
untenable in an age of mass digitization. Indeed, as she noted, the demand for diligent search was making her work nearly impossible, not least due to the
different legal regimes in the US and the EU:

> Today if one wants to digitize a work, one has to go and ask the rights
holder individually. The problem is often that you can’t find the rights
holder. And sometimes it takes so much time. So there is a rights holder, you
know that he would agree, but it takes so much time to go and find out. And
not all countries have collective management … you have to go company by
company. In Europe we have producing companies that disappear after the film
has been made, because they are created only to make that film. So who are you
going to ask? While in the States the situation is different. You have the
majors, they have the rights, you know who to ask because they are very
stable. But in Europe we have this situation, which makes it very difficult,
the cultural access to cultural heritage. Of course we dream of changing
this.36

The dream is far from realized, however. Since the EU has no direct
legislative competence in the area of copyright, Europeana is the center of a
natural tension between three diverging, but sometimes overlapping instances:
the exclusivity of national intellectual property laws, the economic interests
toward a common market, and the cultural interests in the free movement of
information and knowledge production—a tension that is further amplified by
the coexistence of different legal traditions across member states.37 Seeking
to resolve this tension, the European Parliament and certain units in the
European Commission have strategically used Europeana as a rhetorical lever to
increase harmonization of copyright legislation and thus make it easier for
institutions to make their collections available online.38 “Harmonization” has
thus become a key concept in the rights regime of mass digitization,
essentially signaling interoperability rather than standardization of national
copyright regimes. Yet stakeholders differ in their opinions concerning who
should hold what rights over what content, over what period of time, at what
price, and how things should be made available. Within the process of harmonization, then, lies a process that is less than harmonious: bringing stakeholders to the table and getting them to commit. As the EC interviewee confirms,
harmonization requires not only technical but also political cooperation.

The question of harmonization illustrates the infrapolitical dimensions of
Europeana’s copyright systems, showing that they are not just technical
standards or “direct mirrors of reality” but also “co-produced responses to
technoscientific and political uncertainty.”39 The European attempts to
harmonize copyright standards across national borders therefore pit not only
one technical standard against the other, but also “alternative political
cultures and their systems of public reasoning against one another”40
(Jasanoff, 133). Harmonization thus compresses, rather than eliminates,
national varieties within Europe.41 Hence, Barroso’s vision of Europeana as a
collective _European_ cultural memory is faced with the fragmented patterns of
national copyright regimes, producing if not overtly political borders in the
collections, then certainly infrapolitical manifestations of the cultural
barriers that still exist between European countries.

## The Infrapolitics of Interoperability

Copyright is not the only infrastructural regime that upholds borders in
Europeana’s collections; technical standards also pose great challenges for
the dream of a European connective cultural memory.42 The notion of
_interoperability_ 43 has therefore become a key concern for mass
digitization, as interoperability is what allows digitized cultural memory
institutions to exchange and share documents, queries, and services.44
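
To make the idea of such exchange concrete, here is a minimal sketch in Python of the kind of protocol-level interoperability the paragraph describes, using OAI-PMH, a metadata-harvesting protocol widely used by cultural heritage aggregators. The endpoint URL and set name in the usage comment are placeholders, not Europeana’s actual configuration.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# XML namespaces used by OAI-PMH responses and simple Dublin Core metadata.
OAI_NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def list_records(base_url, metadata_prefix="oai_dc", set_spec=None):
    """Yield (identifier, title) pairs from a single OAI-PMH ListRecords response."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec
    url = base_url + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    for record in tree.iterfind(".//oai:record", OAI_NS):
        identifier = record.findtext(".//oai:identifier", default="", namespaces=OAI_NS)
        title = record.findtext(".//dc:title", default="", namespaces=OAI_NS)
        yield identifier, title

# Usage (hypothetical endpoint and set name):
# for oai_id, title in list_records("https://example.org/oai", set_spec="postcards"):
#     print(oai_id, title)
```

The point of such a protocol is precisely interoperability: any institution exposing the same verbs and metadata format can be harvested by the same client, regardless of its internal systems.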

The rise of interoperability as a key concept in mass digitization is a side-
effect of the increasing complexity of economic, political, and technological
networks. In the twentieth century, most European cultural memory institutions
existed primarily as small “sovereign” institutions, closed spheres governed
by internal logics and with little impetus to open up their internal machinery
to other institutions and cooperate. The early 2000s signaled a shift in the
institutional infrastructural layout of cultural memory institutions, however.
One early significant articulation of this shift was a 324-page European
Commission report entitled _Technological Landscapes for Tomorrow’s Cultural
Economy: Unlocking the Value of Cultural Heritage_ (or the DigiCULT study), a
“roadmap” that outlined the political, organizational, and technological
challenges faced by European museums, libraries, and archives in the period
2002–2006. A central passage noted that the “conditions for success of the
cultural and memory institutions in the Information Society is (sic) the
‘network logic,’ a logic that is of course directly related to the necessity
of being interoperable.”45 The network logic and the resulting demand for interoperability were not merely a question of digital connections, the report suggested, but a more pervasive logic of contemporary society. The report thus conceived interoperability as a question that ran deeper than technological
logic.46 The more complex cultural memory infrastructures become, the more
interoperability is needed if one wants the infrastructures to connect and
communicate with each other.47 As information scholar Christine Borgman notes,
interoperability has therefore long been “the holy grail of digital
libraries”—a statement echoed by Commissioner Reding on Europeana in 2005 when
she stated that “I am not suggesting that the Commission creates a single
library. I envisage a network of many digital libraries—in different
institutions, across Europe.”48 Reding’s statement shows that even at the
height of the French exceptionalist discourse on European mass digitization,
other political forces worked instead to reformat the sovereign sphere into a
network. The unravelling of the bounded spheres of cultural memory
institutions into networked infrastructures is therefore both an effect of,
and the further mobilization of, increased interoperability.

Interoperability is not only a concern for mass digitization projects,
however; rather, the call for interoperability takes place on a much more
fundamental level. A European Council Conclusion on Europeana identifies
interoperability as a key challenge for the future construction of Europeana,
but also embeds this concern within the overarching European interoperability
strategy, _European Interoperability Framework for pan-European eGovernment
services_. 49 Today, then, interoperability appears to be turning into a
social theory. The extension of the concept of interoperability into the
social sphere naturally follows the socialization of another technical term:
infrastructure. In the past decades, Susan Leigh Star, Geoffrey Bowker, and
others have successfully managed to frame infrastructure “not only in terms of
human versus technological components but in terms of a set of interrelated
social, organizational, and technical components or systems (whether the data
will be shared, systems interoperable, standards proprietary, or maintenance
and redesign factored in).”50 It follows, then, as Christine Borgman notes,
that even if interoperability in technical terms is a “feature of products and
services that allows the connection of people, data, and diverse systems,”51
policy practice, standards and business models, and vested interest are often
greater determinants of interoperability than is technology.52 In similar
terms, information science scholar Jerome McDonough notes that “we need to
cease viewing [interoperability] purely as a technical problem, and
acknowledge that it is the result of the interplay of technical and social
factors.”53 Pushing the concept of interoperability even further, legal
scholars Urs Gasser and John Palfrey have even argued for viewing the world
through a theory of interoperability, naming their project “interop theory,”54
while Internet governance scholar Laura DeNardis proposes a political theory
of interoperability.55

More than denoting a technical fact, then, interoperability emerges today as
an infrastructural logic, one that promotes openness, modularity, and
connectivity. Within the field of mass digitization, the notion of
interoperability is in particular promoted by the infrastructural workers of
cultural memory (e.g., archivists, librarians, software developers, digital
humanists, etc.) who dream of opening up the silos they work on to enrich them
with new meanings.56 As noted in chapter 1, European cultural memory
institutions had begun to address unconnected institutions as closed “silos.”
Mass digitization offered a way of thinking of these institutions anew—not as
frigid closed containers, but rather as vital connective infrastructures.
Interoperability thus gives rise to a new infrastructural form of cultural
memory: the traditional delineated sovereign spheres of expertise of analog
cultural memory institutions are pried open and reformatted as networked
ecosystems that consist not only of the traditional national public providers,
but also of additional components that have hitherto been alien in the
cultural memory industry, such as private individual users and commercial
industries.57

The logic of interoperability is also born of a specific kind of
infrapolitics: the politics of modular openness. Interoperability is motivated
by the “open” data movements that seek to break down proprietary and
disciplinary boundaries and create new cultural memory infrastructures and
ways of working with their collections. Such visions are often fueled by
Lawrence Lessig’s conviction that “the most important thing that the Internet
has given us is a platform upon which experience is interoperable.”58 And they
have given rise to the plethora of cultural concepts we find on the Internet
in the age of digital capitalism, such as “prosumers”, “produsers”, and so on.
These concepts are becoming more and more pervasive in the digital environment
where “any format of sound can be mixed with any format of video, and then
supplemented with any format of text or images.”59 According to Lessig, the
challenge to this “open” vision comes from those “who don’t play in this
interoperability game,” and the contestation between the “open” and the
“closed” takes place in the “the network,” which produces “a world where
anyone can clip and combine just about anything to make something new.”60

Despite its centrality in the mass digitization rhetoric, the concept of
interoperability and the politics it produces are rarely discussed in critical
terms. Yet, as Gasser and Palfrey readily conceded in 2007, interoperability
is not necessarily in itself an “unalloyed good.” Indeed, in “certain
instances,” Palfrey and Gasser noted, interoperability brings with it possible
drawbacks such as increased homogeneity, lack of security, and lack of reliability.61 Today, ten years on, Gasser and Palfrey’s admissions
of the drawbacks of interoperability appear too modest, and it becomes clear
that while their theoretical apparatus was able to identify the centrality of
interoperability in a digital world, their social theory missed its larger
political implications.

When scanning the literature and recommendations on interoperability, certain
words emerge again and again: innovation, choice, diversity, efficiency,
seamlessness, flexibility, and access. As Tara McPherson notes in her related
analysis of the politics of modularity, it is not much of a stretch to “layer
these traits over the core tenets of post-Fordism” and note their effect on
society: “time-space compression, transformability, customization, a
public/private blur, etc.”62 The result, she suggests, is a remaking of the
Fordist standardization processes into a “neoliberal rule of modularity.”
Extending McPherson’s critique into the temporal terrain, Franco Bifo Berardi
emphasizes the semantic politics of speed that is also inherent in
connectivity and interoperability: “Connection implies smooth surfaces with no
margins of ambiguity … connections are optimized in terms of speed and have
the potential to accelerate with technological developments.”63 The
connectivity enabled by interoperability thus implies modularity with
components necessarily “open to interfacing and interoperability.”
Interoperability, then, is not only a question of openness, but also a way of
harnessing network effects by means of speed and resilience.

While interoperability may be an inherent infrastructural tenet of neoliberal
systems, increased interoperability does not automatically make mass
digitization projects neoliberal. Yet, interoperability does allow for
increased connectivity between individual cultural memory objects and a
neoliberal economy. And while the neoliberal economy may emulate critical
discourses on freedom and creativity, its main concern is profit. The same
systems that allow users to create and navigate collections more freely are
made interoperable with neoliberal systems of control.64

## The “Work” in Networking

What are the effects of interoperability for the user? The culture of
connectivity and interoperability has not only allowed Europeana’s collections
to become more visible to a wider public, it has also enabled these publics to
become intentionally or unintentionally involved in the act of describing and
ordering these same collections, for instance by inviting users to influence
existing collections as well as to generate their own collections. The
increased interaction with works also transforms them from stable to mobile
objects.65 Mass digitization has thus transformed curatorial practice,
expanding it beyond the closed spheres of cultural memory institutions into
much broader ecosystems and extending the focus of curatorial attention from
fixed objects to dynamic network systems. As a result, “curatorial work has
become more widely distributed between multiple agents including technological
networks and software.”66 From having played a central role in the curatorial
practice, the curator is now only part of this entire system and increasingly
not central to it. Sharing the curator’s place are users, algorithms, software
engineers, and a multitude of other actors.

At the same time, the information deluge generated by digitization has
enhanced the necessity of curation, both within and outside institutions. Once
considered professional caretaking for collections, the curatorial concept
has now been modulated to encompass a whole host of activities and agents,
just as curatorial practices are now ever more engaged in epistemic meaning
making, selecting and organizing materials in an interpretive framework
through the aggregation of global connection.67 And as the already monumental
and ever accelerating digital collections exceed human curatorial capacity,
the computing power of machines and the cognitive capabilities of ordinary citizens are increasingly needed to penetrate and make meaning of the data
accumulations.

What role is Europeana’s user given in this new environment? With the
increased modulation of public-private boundaries, which allow different
modules to take on different tasks and on different levels, the strict
separation between institution and environment is blurring in Europeana. So is
the separation between user, curator, consumer, and producer. New characters
have thus arisen in the wake of these transformations, among them the two figures of the “amateur” and the “citizen scientist.”

In contrast to much of the microlabor that takes place in the digital sphere,
Europeana’s participatory structures often consist of cognitive tasks that are
directly related to the field of cultural memory. This aligns with the
aspirations of the Citizen Science Alliance, which requires that all their
crowdsourcing projects answer “a real scientific research question” and “must
never waste the ‘clicks,’ or time, of volunteers.”68 Citizen science is an
emergent form of research practice in which citizens participate in research
projects on different levels and in different constellations with established
research communities. The participatory structures of citizen science range
from highly complex processes to more simple tasks, such as identifying
colors, themes, patterns that challenge machinic analyses, and so on. There
are different ways of classifying these participatory structures, but the most
prevalent participatory structures in Europeana include:

1. Contribution, where visitors are solicited to provide limited and specified objects, actions, or ideas to an institutionally controlled process, for example, Europeana’s _1914–1918_ exhibition, which allowed (and still allows) users to contribute photos, letters, and other memorabilia from that period.
2. Correction and transcription, where users correct faulty OCR scans of books, newspapers, etc.
3. Contextualization, that is, the practice of placing or studying objects in a meaningful context.
4. Augmenting collections, that is, enriching collections with additional dimensions. One example is the recently launched Europeana Sound Connections, which encourages and enables visitors to “actively enrich geo-pinned sounds from two data providers with supplementary media from various sources. This includes using freely reusable content from Europeana, Flickr, Wikimedia Commons, or even individuals’ own collections.”69
5. And finally, Europeana also offers participation through classification, that is, a social tagging system in which users contribute classifications (a minimal data sketch of such contributions follows this list).
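
The sketch referred to in the last item above is a purely illustrative data model, not Europeana’s actual schema: all field names and values are assumptions made for the example, showing how the contributions listed here might be recorded against institutionally held objects.

```python
# Illustrative sketch (not Europeana's actual schema) of how user
# contributions -- tags, OCR corrections, contributed media -- might be
# recorded against institutionally held objects.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Contribution:
    object_id: str          # identifier of the digitized object being enriched
    user_id: str            # pseudonymous identifier of the contributing user
    kind: str               # "tag" | "transcription" | "context" | "media"
    payload: str            # the tag text, corrected transcription, etc.
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewed: bool = False  # whether an institutional curator has accepted it

# Example: a user correcting a faulty OCR line on a newspaper page.
correction = Contribution(
    object_id="newspaper/1914/page-3",
    user_id="volunteer-42",
    kind="transcription",
    payload="The war began in the summer of 1914.",
)
```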

All these participatory structures fall within the general rubric of
crowdsourcing, and they are often framed in social terms and held up as an
altruistic alternative to the capitalist exploitation of other crowdsourcing
projects, because, as new media theorist Mia Ridge argues, “unlike commercial
crowdsourcing, participation in cultural memory crowdsourcing is driven by
pleasure, not profit. Rather than monetary recompense, GLAM (Galleries, Libraries, Archives, and Museums) projects provide an opportunity for
altruistic acts, activated by intrinsic motivations, applied to inherently
engaging tasks, encouraged by a personal interest in the subject or task.”70
In addition—and based on this notion of altruism—these forms of crowdsourcing
are also subversive successors of, or correctives to, consumerism.

The idea of pitting the activities of citizen science against more simple
consumer logics has been at the heart of Europeana since its inception,
particularly influenced by the French philosopher Bernard Stiegler, who has
been instrumental not only in thinking about, but also building, Europeana’s
software infrastructures around the character of the “amateur.” Stiegler’s
thesis was that the amateur could subvert the industrial ethos of production
because he/she is not driven by a desire to consume as much as a desire to
love, and thus is able to imbue the archive with a logic different from pure
production71 without withdrawing from participation (the word “amateur” comes
from the French word _aimer_ ).72 Yet it appears to me that the convergence of
cultural memory ecosystems leaves little room for the philosophical idea of
mobilizing amateurism as a form of resistance against capitalist logics.73 The
blurring of production boundaries in the new cultural memory ecosystems raises
urgent questions for cultural memory institutions about how they can protect the
ethos of the amateur in citizen archives,74 while also aligning them with
institutional strategies of harvesting the “cognitive surplus” of users75 in
environments where play is increasingly taking on aspects of labor and vice
versa. As cultural theorist Angela Mitropoulos has noted, “networking is also
net-working.”76 Thus, while many of the participatory structures we find in
Europeana are participatory projects proper and not just what we might call
participation-lite—or minimal participation77—models, the new interoperable
infrastructures of cultural memory ecosystems make it increasingly difficult
to uphold clear-cut distinctions between civic practice and exploitation in
crowdsourcing projects.

## Collecting Europe

If Europeana is a late-sovereign mass digitization project that maintains
discursive ties to the national imaginary at the same time that it undercuts
this imaginary by means of networked infrastructures through increased
interoperability, the final question is: what does this late-sovereign
assemblage produce in cultural terms? As outlined above, it was an aspiration
of Europeana to produce and distribute European cultural memory by means of
mass digitization. Today, its collection gathers more than 50 million cultural
works in differing formats—from sound bites to photographs, textiles, films,
files, and books. As the previous sections show, however, the processes of
gathering the cultural artifacts have generated a lot of friction, producing a
political reality that in some respects reproduces and accentuates the
existing politics of cultural memory institutions in terms of representation
and ownership, and in other respects gives rise to new forms of cultural
memory politics that part ways with the political regimes of traditional
curatorial apparatuses.

The story of how Europeana’s initial collection was published and later
revised offers a good opportunity to examine its late-sovereign political
dynamics. Europeana launched in 2008, giving access to some 4.5 million
digital objects from more than 1,000 institutions. Shortly after its launch,
however, the site crashed for several hours. The reason given by EU officials
was that Europeana was a victim of its own success: “On the first day of its
launch, Europe’s digital library Europeana was overwhelmed by the interest
shown by millions of users in this new project … thousands of users searching
in the very same second for famous cultural works like the _Mona Lisa_ or
books from Kafka, Cervantes, or James Joyce. … The site was down because of
massive interest, which shows the enormous potential of Europeana for bringing
cultural treasures from Europe’s cultural institutions to the wide public.” 78
The truth, however, lay elsewhere. As a Europeana employee explained, the site
didn’t buckle under the enormous interest shown in it, but rather because
“people were hitting the same things everywhere.” The problem wasn’t so much
the way they were hitting on material, but _what_ they were hitting; the
Europeana employee explained that people’s search terms took the Commission by
surprise, “even hitting things the Commission didn’t want to show. Because
people always search for wrong things. People tend to look at pornographic and
forbidden material such as _Mein Kampf_ , etc.”79 Europeana’s reaction was to
shut down and redesign Europeana’s search interface. The downtime, then, was caused not by user popularity but by a decision made by the Commission and Europeana staff to rework the technical features of Europeana so that the most popular searches would not be public and to remove
potentially politically contentious material such as _Mein Kampf_ and nude
works by Peter Paul Rubens and Abraham Bloemaert, among others. Another
Europeana employee explained that the launch of Europeana had been forced
through before its time because of a meeting among the cultural ministers in
Europe, making it possible to display only a prototype. This beta version was
coded to reveal the most popular searches, producing a “carousel” of the same
content because, as the previous quote explains, people would search for the
same things, in particular “porn” and “ _Mein Kampf_ ,” allegedly leading the
US press to call Europeana a collection of fascist and porn material.

On a small scale, Europeana’s early glitch highlighted the challenge of how to
police the incoming digital flows from national cultural heritage institutions
for in-copyright works. With hundreds of different institutions feeding
hundreds of thousands of texts, images, and sounds into the portal, scanning
the content for illegal material was an impossible task for Europeana
employees. Many in-copyright works began flooding the portal. One in-copyright
work that appeared in the portal stood out in particular: Hitler’s _Mein
Kampf_. A common conception has been that _Mein Kampf_ was banned after WWII.
The truth was more complicated and involved a complex copyright case. When
Hitler died, his belongings were given to the state of Bavaria, including his
intellectual property rights to _Mein Kampf_. Since Hitler’s copyright was
transferred as part of the Allies’ de-Nazification program, the Bavarian state
allowed no one to republish the book. 80 Therefore, reissues of _Mein Kampf_
only reemerged in 2015, when the copyright was released. The premature digital
distribution of _Mein Kampf_ in Europeana was thus, according to copyright
legislation, illegal. While the _Mein Kampf_ case was extraordinary, it
flagged a more fundamental problem of how to police and analyze all the
incoming data from individual cultural heritage institutions.

On a more fundamental level, however, _Mein Kampf_ indicated not only a legal,
but also a political, issue for Europeana: how to deal with the expressions
that Europeana’s feedback mechanisms facilitated. Mass digitization promoted a
new kind of cultural memory logic, namely of feedback. Feedback mechanisms are
central to data-driven companies like Google because they offer us traces of
the inner worlds of people that would otherwise never appear in empirical
terms, but that can be catered to in commercial terms. 81 Yet, while the
traces might interest the corporation (or sociologist) on the hunt for
people’s hidden thoughts, a prestige project such as Europeana found it
untenable. What Europeana wanted was to present Europe’s cultural memory; what
they ended up showing was Europeans’ intense fascination with fascism and
porn. And this was problematic because Europeana was a political project of
representation, not a commercial project of capture.82

Since its glitchy launch, Europeana has refined its interface techniques, is
becoming more attuned to network analytics, and has grown exponentially in terms of both institutional and material scope. There are, at the time of
this writing, more than 50 million items in Europeana, and while its numbers
are smaller than those of Google Books, its scope is much broader, including images,
texts, sounds, videos, and 3-D objects. The platform features carefully
curated exhibitions highlighting European themes, from generalized exhibitions
about World War I and European artworks to much more specialized exhibitions
on, for instance, European cake culture.

But how is Europe represented in statistical terms? Since Europeana’s
inception, there have been huge variances in how much each nation-state
contributes to Europeana.83 So while Europeana is in principle representing
Europe’s collective cultural memory, in reality it represents a highly
fragmented image of Europe, with many European countries not even appearing
in the databases. Moreover, even these numbers are potentially misleading, as
one information scholar formerly working with Europeana notes: to pump up
their statistical representation, many institutions strategically invented
counting systems that would make their representation seem bigger than it
really is, for example, by declaring each scanned page of a medieval manuscript an object instead of counting the entire manuscript as one work.84 The strategic acts of
volume increase are interesting mass digitization phenomena for many reasons:
first, they reveal the ultimately volume-based approach of mass digitization.
According to the scholar, this volume-based approach finds a political support
in the EC system, for whom “the object will always be quantitative” since
volume is “the only thing the commission can measure in terms of funding and
result.”85 In a way then, the statistics tell more than one story: in
political terms, they recount not only the classic tale of a fragmented Europe
but also how Europe is increasingly perceived, represented, and managed by
calculative technologies. In technical terms, they reveal the gray areas of
how to delineate and calculate data: what makes a data object? And in cultural
policy terms, they reflect the highly divergent prioritization of mass
digitization in European countries.

The final question is, then: how is this fragmented European collection
distributed? This is the point where Europeana’s territorial matrix reveals
its ultimately networked infrastructure. Europeana may be entered through
Google, Facebook, Twitter, and Pinterest, and vice versa. Therefore a click on
the aforementioned cake exhibition, for example, takes one straight to Google
Arts and Culture. The transition from the Europeana platform to Google happens smoothly, without any friction or notice, and if one didn't look at the change in URL, one would hardly register the change at all, since the interfaces appear almost identical. Yet, what are the implications of this
networked nature? An obvious consequence is that Europeana is structurally
dependent on the social media and search engine companies. According to one
Europeana report, Google is the biggest source of traffic to the Europeana
portal, accounting for more than 50 percent of visits. Any changes in Google’s
algorithm and ranking index therefore significantly impact traffic patterns on
the Europeana portal, which in turn affects the number of Europeana pages
indexed by Google, which then directly impacts the number of overall visits
to the Europeana portal.86 The same holds true for Facebook, Pinterest,
Google+, etc.

Held together, the feedback mechanisms, the statistical variance, and the
networked infrastructures of Europeana show just how difficult it is to
collect Europe in the digital sphere. This is not to say that territorial
sentiments don’t have power, however—far from it. Within the digital sphere we
are already seeing territorial statements circulated in Europe on both
national and supranational scales, with potentially far-reaching implications for both. Yet, there is little to suggest that these territorial sentiments will reproduce sovereign spheres in practice. To the extent that reterritorializing sentiments circulate in globalizing networks, this chapter has sought to counter both ideas about post-sovereignty and ideas about pure nationalization, viewing mass digitization instead through the lens of late-sovereignty. As this chapter shows, the notion of late-sovereignty allows us to conceptualize mass digitization programs, such as Europeana, as globalized phenomena couched within the language of (supra)national sovereignty. In an age where rampant nationalist movements sweep through globalized communication networks, this approach feels all the more urgent and applicable not only to mass digitization programs, but also to reterritorializing communication phenomena more broadly. Only if we take seriously the ways in which the nationalist imaginary works in the infrastructural reality of late capitalism can we begin to account for the infrapolitics of the highly mediated new territorial imaginaries.

## Notes

1. Lefler 2007; Henry W., “Europe’s Digital Library versus Google,” Café
Babel, September 22, 2008, /europes-digital-library-versus-google.html>; Chrisafis 2008. 2. While
digitization did not stand apart from the political and economic developments
in the rapidly globalizing world, digital theorists and activists soon gave
rise to the Internet as an inherent metaphor for this integrative development,
a sign of the inevitability of an ultimately borderless world, where as
Negroponte notes, time zones would “probably play a bigger role in our digital
future than trade zones” (Negroponte 1995, 228). 3. Goldsmith and Wu 2006. 4.
Rogers 2012. 5. Anderson 1991. 6. “Jacques Chirac donne l’impulsion à la
création d’une bibliothèque numérique,” _Le Monde_ , March 16, 2005,
donne-l-impulsion-a-la-creation-d-une-bibliotheque-
numerique_401857_3246.html>. 7. Meunier 2007. 8. As Sophie Meunier reminds us,
the _Ursprung_ of the competing universalisms can be located in the two
contemporary revolutions that lent legitimacy to the universalist claims of
both the United States and France. In the wake of the revolutions, a perceived
competition arose between these two universalisms, resulting in French
intellectuals crafting anti-American arguments, not least when French
imperialism “was on the wane and American imperialism on the rise.” See
Meunier 2007, 141. Indeed, Meunier suggests, anti-Americanism is "as much a
statement about France as it is about America—a resentful longing for a power
that France no longer has” (ibid.). 9. Jeanneney 2007, 3. 10. Emile Chabal
thus notes how the term is “employed by prominent politicians, serious
academics, political commentators, and in everyday conversation” to “cover a
wide range of stereotypes, pre-conceptions, and judgments about the Anglo-
American world” (Chabal 2013, 24). 11. Chabal 2013, 24–25. 12. Jeanneney 2007.
13. While Jeanneney framed this French cultural-political endeavor as a
European “contre-attaque” against Google Books, he also emphasized that his
polemic was not at all to be read as a form of aggression. In particular he
pointed to the difficulties of translating the word _défie_ , which featured
in the title of the piece: “Someone rightly pointed out that the English word
‘defy,’ with which American reporters immediately rendered _défie,_ connotes a
kind of violence or aggressiveness that isn’t implied by the French word. The
right word in English is ‘challenge,’ which has a different implication, more
sporting, more positive, more rewarding for both sides” (Jeanneney 2007, 85).
14. See pages 12, 22, and 24 for a few examples in Jeanneney 2007. 15. On the
issue of the common currency, see, for instance, Martin and Ross 2004. The
idea of France as an appropriate spokesperson for Europe was familiar already
in the eighteenth century when Voltaire declared French “la Langue de
l’Europe”; see Bivort 2013. 16. The official thus first noted that "Everybody is working on digitization projects … cooperation between Google and the European project could therefore well occur," and later added that "The worst
scenario we could achieve would be that we had two big digital libraries that
don’t communicate. … The idea is not to do the same thing, so maybe we could
cooperate, I don’t know. Frankly, I’m not sure they would be interested in
digitizing our patrimony. The idea is to bring something that is
complementary, to bring diversity. But this doesn’t mean that Google is an
enemy of diversity.” See Labi 2005. 17. Letter from Manuel Barroso to Jacques
Chirac, July 7, 2005,
[http://www.peps.cfwb.be/index.php?eID=tx_nawsecuredl&u=0&file=fileadmin/sites/numpat/upload/numpat_super_editor/numpat_editor/documents/Europe/Bibliotheques_numeriques/2005.07.07reponse_de_la_Commission_europeenne.pdf&hash=fe7d7c5faf2d7befd0894fd998abffdf101eecf1](http://www.peps.cfwb.be/index.php?eID=tx_nawsecuredl&u=0&file=fileadmin/sites/numpat/upload/numpat_super_editor/numpat_editor/documents/Europe/Bibliotheques_numeriques/2005.07.07reponse_de_la_Commission_europeenne.pdf&hash=fe7d7c5faf2d7befd0894fd998abffdf101eecf1).
18. As one EC communication noted, a digitization project on the scale of
Europeana could sharpen Europe’s competitive edge in digitization processes
compared to those in the US as well as India and China; see European Commission,
“i2010: Digital Libraries,” _COM(2005) 465 final_ , September 30, 2005, [eur-
lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52005DC0465&from=EN](http
://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52005DC0465&from=EN).
19. “Google Books raises concerns in some member states,” as an anonymous
Czech diplomatic source put it; see Paul Meller, “EU to Investigate Google
Books’ Copyright Policies,” _PCWorld_ , May 28, 2009,
.
20. Pfanner 2011; Doward 2009; Samuel 2009. 21. Amicus brief is a legal term
that in Latin means “friend of the court.” Frequently, a person or group who
is not a party to a lawsuit, but has a strong interest in the matter, will
petition the court for permission to submit a brief in the action with the
intent of influencing the court’s decision. 22. See chapter 4 in this volume.
23. de la Durantaye 2011. 24. Kevin J. O’Brien and Eric Pfanner, “Europe
Divided on Google Book Deal,” _New York Times_ , August 23, 2009,
; see
also Courant 2009; Darnton 2009. 25. de la Durantaye 2011. 26. Viviane Reding
and Charlie McCreevy, “It Is Time for Europe to Turn over a New E-Leaf on
Digital Books and Copyright,” MEMO/09/376, September 7, 2009, [europa.eu/rapid
/press-release_MEMO-09-376_en.htm?locale=en](http://europa.eu/rapid/press-
release_MEMO-09-376_en.htm?locale=en). 27. European Commission,
“Europeana—Next Steps,” COM(2009) 440 final, August 28, 2009, [eur-
lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2009:0440:FIN:en:PDF](http
://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2009:0440:FIN:en:PDF).
28. “It is logical that the private partner seeks a period of preferential use
or commercial exploitation of the digitized assets in order to avoid free-
rider behaviour of competitors. This period should allow the private partner
to recoup its investment, but at the same time be limited in time in order to
avoid creating a one-market player situation. For these reasons, the Comité
set the maximum time of preferential use of material digitised in public-
private partnerships at maximum 7 years” (Niggemann 2011). 29. Walker 2003.
30. Within this complex environment it is not even possible to draw boundaries
between the networked politics of the EU and the sovereign politics of member
states. Instead, member states engage in double-talk. As political scientist
Sophie Meunier reminds us, even member states such as France engage in double-
talk on globalization, with France on the one hand becoming the “worldwide
champion of anti-globalization,” and on the other hand “a country whose
economy and society have quietly adapted to this much-criticized
globalization” (Meunier 2003). On political two-level games, see also Putnam
1988. 31. Walker 2003. 32. “Google Books Project to Remove European Titles,”
_Telegraph_ , September 7, 2009,
remove-European-titles.html>. 33. “Europeana Factsheet,” Europeana, September
28, 2015,
/copy-of-europeana-policy-illustrating-the-20th-century-black-hole-in-the-
europeana-dataset.pdf> . 34. C. Handke, L. Guibault, and J. J. Vallbé, “Is
Europe Falling Behind in Data Mining? Copyright’s Impact on Data Mining in
Academic Research,” 2015, id-12015-15-handke-elpub2015-paper-23>. 35. Interview with employee, DG
Copyright, EC Commission, 2010. 36. Interview with employee, DG Information and Society, EC Commission, 2010. 37. Montagnani and Borghi 2008. 38. Julia
Fallon and Paul Keller, “European Parliament Demands Copyright Rules that
Allow Cultural Heritage Institutions to Share Collections Online,” Europeana
Pro, rules-better-fit-for-a-digital-age>. 39. Jasanoff 2013, 133. 40. Ibid. 41. Tate
2001. 42. It would be tempting to suggest the discussion on harmonization
above would apply to interoperability as well. But while the concepts of
harmonization and interoperability—along with the neighboring term
standardization—are used intermittently and appear similar at first glance,
they nevertheless have precise cultural-legal meanings and implicate different
infrastructural set-ups. As noted above, the notion of harmonization is
increasingly used in the legal context of harmonizing regulatory
apparatuses—in the case of mass digitization especially copyright laws. But
the word has a richer semantic meaning, suggesting a search for commonalities,
literally by means of fitting together or arranging units into a whole. As
such the notion of harmony suggests something that is both pleasing and
presupposes a cohesive unit(y), for example, a door hinged to a frame, an arm
hinged to a body. While used in similar terms, the notion of interoperability
expresses a very different infrastructural modality. If harmonization suggests
unity, interoperability rather alludes to modularity. For more on the concepts
of standardization and harmonization in regulatory contexts, see Tay and
Parker 1990. 43. The notion of interoperability is often used to express a
system’s ability to transfer, render and connect to useful information across
systems, and calls for interoperability have increased as systems have become
increasingly complex. 44. There are “myriad technical and engineering issues
associated with connecting together networks, databases, and other computer-
based systems”; digitized cultural memory institutions have the option of
providing “a greater array of services” than traditional libraries and
archives, from sophisticated search engines to document reformatting and rights
negotiations; digitized cultural memory materials are often more varied than
the material held in traditional libraries; and finally and most importantly,
mass digitization institutions are increasingly becoming platforms that
connect “a large number of loosely connected components” because no “single
corporation, professional organization, or government” would be able to
provide all that is necessary for a project such as Europeana; not least on an
international scale. EU-NSF Digital Library Working Group on Interoperability
between Digital Libraries Position Paper, 1998,
. 45.  _The
Digicult Report: Technological Landscapes for Tomorrow’s Cultural Economy:
Unlocking the Value of Cultural Heritage: Executive Summary_ (Luxembourg:
Office for Official Publications of the European Communities, 2002), 80. 46.
“… interoperability in organisational terms is not foremost dependent on
technologies,” ibid. 47. As such they align with what Internet governance
scholar Laura DeNardis calls the Internet’s “underlying principle” (see
DeNardis 2014). 48. The results of the EC Working Group on Digital Library
Interoperability are reported in the briefing paper by Stefan Gradmann entitled “Interoperability: A Key Concept for Large Scale, Persistent Digital Libraries” (Gradmann 2009). 49. “Semantic interoperability ensures that programmes
can exchange information, combine it with other information resources and
subsequently process it in a meaningful manner: _European Interoperability
Framework for pan-European eGovernment services_ , 2004,
. In the case of
Europeana, this could consist of the development of tools and technologies to
improve the automatic ingestion and interpretation of the metadata provided by
cultural institutions, for example, by mapping the names of artists so that an
artist known under several names is recognised as the same person.” (“Council Conclusions on the Role of Europeana for the Digital Access, Visibility and Use of European Cultural Heritage,” European Council Conclusion, June 1, 2016,
.) 50.
Bowker, Baker, Millerand, and Ribes 2010. 51. Tsilas 2011, 103. 52. Borgman
2015, 46. 53. McDonough 2009. 54. Palfrey and Gasser 2012. 55. DeNardis 2011.
56. The .txtual Condition: Digital Humanities, Born-Digital Archives, and the
Future Literary; Palfrey and Gasser 2012; Matthew Kirschenbaum, “Distant
Mirrors and the Lamp,” talk at the 2013 MLA Presidential Forum Avenues of
Access session on “Digital Humanities and the Future of Scholarly
Communication.” 57. Ping-Huang 2016. 58. Lessig 2005. 59. Ibid. 60. Ibid. 61.
Palfrey and Gasser 2012. 62. McPherson 2012, 29. 63. Berardi, Genosko, and
Thoburn 2011, 29–31. 64. For more on the nexus of freedom and control, see
Chun 2006. 65. The mere act of digitization of course inflicts mobility on an
object as digital objects are kept in a constant state of migration. 66. Krysa
2006. 67. See only the wealth of literature currently generated on the
“curatorial turn,” for example, O’Neill and Wilson 2010; and O’Neill and
Andreasen 2011. 68. Romeo and Blaser 2011. 69. Europeana Sound Connections,
collections-on-a-social-networking-platform.html>. 70. Ridge 2013. 71. Carolyn
Dinshaw has argued for the amateur’s ability in similar terms, focusing on her
potential to queer the archive (see Dinshaw 2012). 72. Stiegler 2003; Stiegler
n.d. The idea of the amateur as a subversive character precedes digitization,
of course. Think only of Roland Barthes’s idea of the amateur as a truly
subversive character that could lead to a break with existing ideologies in
disciplinary societies; see, for instance, Barthes’s celebration of the
amateur as a truly anti-bourgeois character (Barthes 1977 and Barthes 1981).
73. Not least in light of recent writings on the experience of even love itself as a form of labor (see Weigel 2016). The constellation of love as a
form of labor has a long history (see Lewis 1987). 74. Raddick et al. 2009;
Proctor 2013. 75. “Many companies and institutions, that are successful
online, are good at supporting and harnessing people’s cognitive surplus. …
Users get the opportunity to contribute something useful and valuable while
having fun” (Sanderhoff, 33 and 36). 76. Mitropoulos 2012, 165. 77. Carpentier
2011. 78. EC Commission, “Europeana Website Overwhelmed on Its First Day by
Interest of Millions of Users,” MEMO/08/733, November 21, 2008,
. See also Stephen
Castle, “Europeana Goes Online and Is Then Overwhelmed,” _New York Times_ ,
November 21, 2008,
[nytimes.com/2008/11/22/technology/Internet/22digital.html](http://nytimes.com/2008/11/22/technology/Internet/22digital.html).
79. Information scholar affiliated with Europeana, interviewed by Nanna Bonde
Thylstrup, Brussels, Belgium, 2011. 80. See, for instance, Martina Powell,
“Bayern will mit ‘Mein Kampf’ nichts mehr zu tun haben,” _Die Zeit_ , December
13, 2013, soll-erscheinen>. Bavaria’s restrictive publishing policy of _Mein Kampf_
should most likely be interpreted as a precautionary measure to protect the Bavarian state's diplomatic reputation. Yet the transfer of Hitler's authorial rights to the Bavarian Ministry consigned _Mein Kampf_ to an existence in a gray area between private and public law. Since then, the book has been the center of attention in a rift between, on the one hand, the Ministry of Finance, which has rigorously defended its position as the formal rights holder, and, on the other hand, historians and intellectuals who, supported by the Bavarian science minister Wolfgang Heubisch, have argued that an annotated academic version of _Mein Kampf_ should be made publicly accessible in the name of Enlightenment. 81. Latour 2007. 82. Europeana’s more
traditional curatorial approach to mass digitization was criticized not only
by the media, but also by others involved in mass digitization projects, who claimed that Europeana had fundamentally misunderstood the point of mass digitization. One engineer working on mass digitization projects at the influential cultural software development organization IRI argued that Europeana’s production pattern was comparable to “launching satellites” without thinking about the messages returned by the satellites. Google,
he argued, was differently attuned to the importance of feedback, because
“feedback is their business.” 83. In the most recently published report, Germany contributes about 15 percent and France around 16 percent of the total number of available works. At the same time, Belgium and Slovenia count for only around 1 percent, and Denmark, along with Greece, Luxembourg, Portugal, and a slew of other countries, does not even achieve representation in the pie
chart; see “Europeana Content Report,” August 6, 2015,
/europeana-dsi-ms7-content-report-august.pdf>. 84. Europeana information
scholar interview, 2011. 85. Ibid. 86. Wiebe de Jager, “MS15: Annual traffic
report and analysis,” Europeana, May 31, 2014,
.

# 4
The Licit and Illicit Nature of Mass Digitization

## Introduction: Lurking in the Shadows

A friend has just recommended an academic book to you, and now you are dying
to read it. But you know that it is both expensive and hard to get your hands
on. You head down to your library to request the book, but you soon realize that the wait list is enormous and that you will not get hold of the book for a couple of weeks. Desperate, you turn to your friend for help. She asks, “Why don’t you just go to a pirate library?” and provides you with a link. A new world opens up. Twenty minutes later you have downloaded 30 books that you feel are indispensable to your bookshelf. You haven’t paid a thing. You know what you did was illegal. Yet you also feel strangely
justified in your actions, not least spurred on by the enthusiastic words on
the shadow library’s front page, which sets forth a comforting moral compass.
You begin thinking to yourself: “Why are pirate libraries deemed more illegal
than Google’s controversial scanning project?” and “What are the moral
implications of my actions vis-à-vis the colonial framework that currently
dictates Europeana’s copyright policies?”

The existence of what this book terms shadow libraries raises difficult
questions, not only for your own moral compass but also for the field of mass
digitization. Political and popular discourses often reduce the complexity of
these questions to “right” and “wrong” and Hollywood narratives of pirates and
avengers. Yet, this chapter wishes to explore the deeper infrapolitical
implications of shadow libraries, setting out the argument that shadow
libraries offer us a productive framework for examining the highly complex
legal landscape of mass digitization. Rather than writing a chapter that
either supports or counters shadow libraries, the chapter seeks to chart the
complexity of the phenomenon and tease out its relevance for mass digitization
by framing it within what we might call an infrapolitics of parasitism.

In _The Parasite_ , a strange and fabulating book that brings together
information theory and cybernetics, physics, philosophy, economy, biology,
politics, and folk tales, French philosopher Michel Serres constructs an
argument about the conceptual figure of the parasite to explore the parasitic
nature of social relations. In a dizzying array of images and thought-
constructs, Serres argues against the idea of a balanced exchange of energy,
suggesting instead that our world is characterized by one parasite stealing
energy by feeding on another organism. For this purpose he reminds us that in French the term parasite has three distinct but related meanings. The first relates to one organism feeding off another and giving nothing in return. The second refers to the social concept of the freeloader, who lives off society without giving anything in return. Both of these meanings are fairly familiar to most, and lay the groundwork for our annoyance with both bugs and spongers. The third meaning, however, is largely unknown outside French: here the
parasite is static noise or interference in a channel, interrupting the
seemingly balanced flow of things, mediating and thus transforming relations.
Indeed, for Serres, the parasite is itself a disruptive relation (rather than
entity). The parasite can also change positions of sender, receiver, and
noise, making it exceedingly difficult to discern parasite from nonparasite;
indeed, to such an extent that Serres himself exclaims “I no longer really
know how to say it: the parasite parasites the parasites.”1 Serres thus uses
his parasitic model to make a claim about the nature of cybernetic
technologies and the flow of information, arguing that “cybernetics gets more
and more complicated, makes a chain, then a network. Yet it is founded on the
theft of information, quite a simple thing.”2 The logic of the parasite,
Serres argues, is the logic of the interrupter, the “excluded third” or
“uninvited guest” who intercepts and confuses relations in a process of theft
that has a value both of destruction and a value of construction. The parasite
is thus a generative force, inventing, affecting, and transforming relations.
Hence, parasitism refers not only to an act of interference but also to an
interruption that “invents something new.”3

Michel Serres’s then-radical philosophy of the parasite is today echoed by a
broader recognition of the parasite as not only a dangerous entity, but also a
necessary mediator. Indeed, as Jeanette Samyn notes, we are today witnessing a
“pro-parasitic” movement in science in which “scientists have begun to
consider parasites and other pathogens not simply as problems but as integral
components of ecosystems.”4 In this new view, “… the parasite takes from its
host without ever taking its place; it creates new room, feeding off excess,
sometimes killing, but often strengthening its milieu.” In the following
sections, the lens of the parasite will help us explore the murky waters of
shadow libraries, not (only) as entities, but also as relational phenomena.
The point is to show how shadow libraries belong to the same infrapolitical
ecosystem as Google Books and Europeana, sometimes threatening them, but often
also strengthening them. Moreover, it seeks to show how visitors’ interactions
with shadow libraries are also marked by parasitical relations with Google,
which often mediates literature searches, thus entangling Google and shadow
libraries in a parasitical relationship where one feeds off the other and vice
versa.

Despite these entangled relations, the mass digitization strategies of shadow
libraries, Europeana, and Google Books differ significantly. Basically, we
might say that Google Books and Europeana each represent different strategies
for making material available on an industrial scale while maintaining claims
to legality. The sprawling and rapidly growing group of mass digitization
projects interchangeably termed shadow libraries represents a third set of
strategies. Shadow libraries5 share affinities with Europeana and Google Books
in the sense that they offer many of the same services: instant access to a
wealth of cultural works spanning journal articles, monographs, and textbooks
among others. Yet, while Google Books and Europeana promote visibility to
increase traffic, embed themselves in formal systems of communication, and
operate within the legal frameworks of public funding and private contracting,
shadow libraries in contrast operate in the shadows of formal visibility and
regulatory systems. Hence, while formal mass digitization projects such as
Google Books and Europeana publicly proclaim their desire to digitize the
world’s cultural memory, another layer of people, scattered across the globe
and belonging to very diverse environments, harbor the same aspirations, but
in much more subtle terms. Most of these people express an interest in the
written word, a moral conviction of free access, and a political view on
existing copyright regulations as unjust and/or untimely. Some also express
their fascination with the new wonders of technology and their new
infrastructural possibilities. Others merely wish to practice forms of access
that their finances, political regime, or geography would otherwise prohibit. And all of them are important nodes in a new shadowy
infrastructural system that provides free access worldwide to books and
articles on a scale that collectively far surpasses both Google and Europeana.

Because of their illicit nature, most analyses of shadow libraries have
centered on their legal transgressions. Yet, their cultural trajectories
contain nuances that far exceed legal binaries. Approaching shadow libraries
through the lens of infrapolitics is helpful for bringing forth these much
more complex cultural mass digitization systems. This chapter explores three
examples of shadow libraries, focusing in particular on their stories of
origin, their cultural economies, and their sociotechnical infrastructures.
Not all shadow libraries fit perfectly into the category of mass digitization.
Some of them are smaller in size, more selective, and less industrial.
Nevertheless, I include them because their open access strategies allow for
unlimited downloads. Thus, shadow libraries, while perhaps limited in size
themselves, offer the opportunity to reproduce works at a massive and
distributed scale. As such, they are the perfect example of a mass
digitization assemblage.

The first case centers on lib.ru, an early Russia-based platform for exchanging books that today has grown into a massive and distributed file-sharing project. It is primarily run by individuals, but it has also received
public funding, which shows that what at first glance appears as a simple case
of piracy simultaneously serves as a much more complex infrapolitical
structure. The second case, Monoskop, distinguishes itself by its boutique
approach to digitization. Monoskop too is characterized by its territorial
trajectory, rooted in Bratislava’s digital scene as an attempt to establish an
intellectual platform for the study of avant-garde (digital) cultures that
could connect its Bratislava-based creators to a global scene. Finally, the
chapter looks at UbuWeb, a shadow library dedicated to avant-garde cultural
works ranging from text and audio to images and film. Founded in 1996 as a US-
based noncommercial file-sharing site by poet Kenneth Goldsmith in response to
the marginal distribution of crucial avant-garde material, UbuWeb today offers
a wealth of avant-garde sound art, video, and textual works.

As the case studies show, shadow libraries have become significant mass
digitization infrastructures that offer the user free access to academic
articles and books, often by means of illegal file-sharing. They are informal
and unstable networks that rely on active user participation across a wide
spectrum, from deeply embedded people who have established file-sharing sites
to the everyday user occasionally sending the odd book or article to a friend
or colleague. As Lars Eckstein notes, most shadow libraries are characterized
not only by their informality, but also by the speed with which they
operate, providing “a velocity of media content” which challenges legal
attacks and other forms of countermeasures.6 Moreover, shadow libraries also
often operate in a much more widely distributed fashion than both Europeana
and Google, distributing and mirroring content across multiple servers, and
distributing labor and responsibility in a system that is on the one hand more
robust, more redundant, and more resistant to any single point of failure or
control, and on the other hand more ephemeral, without a central point of
back-up. Indeed, some forms of shadow libraries exist entirely without a
center, instead operating infrastructurally along communication channels in
social media; for example, the use of the Twitter hashtag #ICanHazPDF to help
pirate scientific papers.

Today, shadow libraries exist as timely reminders of the infrapolitical nature
of mass digitization. They appear as hypertrophied versions of the access
provided by Google Books and Europeana. More fundamentally, they also exist as
political symptoms of the ideologies of the digital, characterized by ideals
of velocity and connectivity. As such, we might say that although shadow
libraries often position themselves as subversives, in many ways they also
belong to the same storyline as other mass digitization projects such as
Google Books and Europeana. Significantly, then, shadow libraries are
infrapolitical in two senses: first, they have become central infrastructural
elements in what James C. Scott calls the “infrapolitics of subordinate
groups,” providing everyday resistance by creating entrance points to
hitherto-excluded knowledge zones.7 Second, they represent and produce the
infrapolitics of the digital _tout court_ with their ideals of real-time,
globalized, and unhindered access.

## Lib.ru

Lib.ru is one of the earliest known digital shadow libraries. It was
established by the Russian computer science professor Maxim Moshkov, who
complemented his academic practice of programming with a personal hobby of
file-sharing on the so-called RuNet, the Russian-language segment of the
Internet.8 Moshkov’s collection had begun as an e-book swapping practice in
1990, but in 1994 he uploaded the material to his institute’s web server where
he then divided the site into several sections such as “my hobbies,” “my work,”
and “my library.”9 If lib.ru began as a private project, however, the role of
Moshkov’s library soon changed as it quickly became Russia’s preferred shadow
library, with users playing an active role in its expansion by constantly
adding new digitized books. Users would continually scan and submit new texts,
while Moshkov, in his own words, worked as a “receptionist” receiving and
handling the material.10

Shadow libraries such as Moshkov’s were most likely born not only out of a
love of books, but also out of frustration with Russia’s lack of access to up-
to-date and affordable Western works.11 As they continued to grow and gain in
popularity, shadow libraries thus became not only points of access, but also
signs of infrastructural failure in the formal library system.12 After lib.ru
outgrew its initial server storage at Moshkov’s institute, Moshkov divided it
into smaller segments that were then distributed, leaving only the Russian
literary classics on the original site.13 Neighboring sites hosted other
genres, ranging from user-generated texts and fan fiction on a shadow site
called [samizdat.lib.ru](http://samizdat.lib.ru) to academic books in a shadow
library titled Kolkhoz, named after the commons-based agricultural cooperative
of the early Soviet era and curated and managed by “amateur librarians.”14 The
steadily accumulating numbers of added works, digital distributors, and online
access points expanded not only the range of the shadow collections, but also
their networked affordances. Lib.ru and its offshoots thus grew into an
influential node in the global mass digitization landscape, attracting both
political and legal attention.

### Lib.ru and the Law

Until 2004, lib.ru deployed a practice of handling copyright complaints by
simply removing works at the first request from the authors.15 But in 2004 the
library received its first significant copyright claim from the big Russian
publisher Kirill i Mefody (KM). KM requested that Moshkov remove access to a
long list of books, claiming exclusive Internet rights on the books, along
with works that were considered public domain. Moshkov refused to honor the
request, and a lawsuit ensued. The Ostankino Court of Moscow initially denied
the lawsuit because the contracts for exclusive Internet rights were
considered invalid. This did not deter KM, however, which then approached the
case from a different perspective, filing applications on behalf of well-known
Russian authors, including the crime author Alexandra Marinina and the science
fiction writer Eduard Gevorkyan. In the end, only Eduard Gevorkyan maintained
his claim, which amounted to the considerable sum of one million rubles.16

During the trial, Moshkov’s library received widespread support from both
technologists and users of lib.ru, expressed, for example, in a manifesto
signed by the International Union of Internet Professionals, which among other
things touched upon the importance of online access not only to cultural works
but also to the Russian language and culture:

> Online libraries are an exceptionally large intellectual fund. They lessen
the effect of so-called “brain drain,” permitting people to stay in the orbit
of Russian language and culture. Without online libraries, the useful effect
of the Internet and computers in Russian education system is sharply lowered.
A huge, openly available mass of Russian literary texts is a foundation
permitting further development of Russian-language culture, worldwide.17

Emphasizing that Moshkov often had an agreement with the authors he put
online, the manifesto also called for a more stable model of online public
libraries, noting that “A wide list of authors who explicitly permitted
placing their works in the lib.ru library speaks volumes about the
practicality of the scheme used by Maxim Moshkov. However, the litigation
underway shows its incompleteness and weak spots.”18 Significantly, Moshkov’s
shadow library also received both moral and financial support from the state,
more specifically in the form of a grant of one million rubles from the
Federal Agency for the Press and Mass Media. The funding came with the
following statement from the Agency’s chairman, Mikhail Seslavinsky:
“Following the lively discussion on how copyright could be protected in
electronic libraries, we have decided not to wait for a final decision and to
support the central library of RuNet—Maxim Moshkov’s site.”19 Seslavinsky’s
support not only reflected the public’s support of the digital library, but
also his own deep-seated interests as a self-confessed bibliophile, council
chair of the Russian organization National Union of Bibliophiles since 2011,
and author of numerous books on bibliology and bibliophilia. Additionally, the
support also reflected the issues at stake for the Russian legislative
framework on copyright. A revised law “On Copyright and Related Rights” had just passed its second reading in the Russian parliament on April 21, 2004, extending the copyright term from 50 to 70 years after an author’s death, in accordance with international law and as a condition of Russia’s
entry into the World Trade Organization.20

The public funding, Moshkov stated, was spent on modernizing the technical
equipment for the shadow library, including upgrading servers and performing
OCR scanning on select texts.21 Yet, despite the widespread support, Moshkov
lost the copyright case to KM on May 31, 2005. The defeat was limited,
however. Indeed, one might even read the verdict as a symbolic victory for
Moshkov, as the court fined him only 30,000 rubles, a fraction of what KM had originally sued for. Still, the verdict had significant consequences for how Moshkov manages lib.ru. After the trial, Moshkov began extending his
classical literature section and stopped uploading books sent by readers into
his collection, unless they were from authors who submitted them because they
wished to publish in digital form.

What can we glean from the story of lib.ru about the infrapolitics of mass
digitization? First, the story of lib.ru illustrates the complex and
contingent historical trajectory of shadow libraries. Second, as the next
section shows, it offers us the possibility of approaching shadow libraries
from an infrastructural perspective, and exploring the infrapolitical
dimensions of shadow libraries in the area of tension between resistance and
standardization.

### The Infrapolitics of Lib.ru: Infrastructures of Culture and Dissent

While global in reach, lib.ru is first and foremost a profoundly
territorialized project. It was born out of a set of political, economic, and
aesthetic conditions specific to Russia and carries the characteristics of its
cultural trajectory. First, the private governance of lib.ru, initially
embodied by Moshkov, echoes the general development of the Internet in Russia
from 1991 to 1998, which was constructed mainly by private economic and
cultural initiatives at a time when the state was in a period of heavy
transition. Lib.ru’s minimalist programming style also made it a cultural
symbol of the early RuNet, acting as a marker of cultural identity for Russian
Internet users at home and abroad.22

The infrapolitics of lib.ru also carry the traits of the media politics of
Russia, which has historically been split into two: a political and visible
level of access to cultural works (through propaganda), and an infrapolitical
invisible level of contestation and resistance, enabling Russian media
consumers to act independently from official institutionalized media channels.
Indeed, some scholars tie the practice of shadow libraries to the Soviet
Union’s analog shadow activities, which are often termed _samizdat_ , that is,
illegal cultural distribution, including illegally listening to Western radio, trafficking Western music, and watching Western films.23
Despite often circulating Western pop culture, the late-Soviet era samizdat
practices were often framed as noncapitalist practices of dissent without
profit motives.24 The dissent, however, was not necessarily explicitly
expressed. Lacking the defining fervor of a clear political ideology, and
offering no initiatives to overthrow the Soviet regime, samizdat was rather a
mode of dissent that evaded centralized ideological control. Indeed, as
Aleksei Yurchak notes, samizdat practices could even be read as a mode of
“suspending the political,” thus “avoiding the political concerns that had a
binary logic determined by the sovereign state” to demonstrate “to themselves
and to others that there were subjects, collectivities, forms of life, and
physical and symbolic spaces in the Soviet context that, without being overtly
oppositional or even political, exceeded that state’s abilities to define,
control, and understand them.”25 Yurchak thus reminds us that even though
samizdat was practiced as a form of nonpolitical practice, it nevertheless
inherently had significant political implications.

The infrapolitics of samizdat not only referred to a specific social practice
but were also, as Ann Komaromi reminds us, a particular discourse network
rooted in the technology of the typewriter: “Because so many people had their
own typewriters, the production of samizdat was more individual and typically
less linked to ideology and organized political structures. … The circulation
of Samizdat was more rhizomatic and spontaneous than the underground
press—samizdat was like mushroom ‘spores.’”26 The technopolitical
infrastructure of samizdat changed, however, with the fall of the Berlin Wall
in 1989, the further decentralization of the Russian media landscape, and the
emergence of digitization. Now, new nodes emerged in the Russian information
landscape, and there was no centralized authority to regulate them. Moreover,
the transition to the Western capitalist system gave rise to new types of
shadow activity that produced items instead of just sharing items, adding a
new consumerist dimension to shadow libraries. Indeed, as Kuznetsov notes, the
late-Soviet samizdat created a dynamic textual space that aligned with more
general tendencies in mass digitization where users were “both readers and
librarians, in contrast to a traditional library with its order, selection,
and strict catalogisation.”27

If many of the new shadow libraries that emerged in the 1990s and 2000s were
inspired by the infrapolitics of samizdat, then, they also became embedded in
an infrastructural apparatus that was deeply nested within a market economy.
Indeed, new digital libraries emerged under such names as Aldebaran,
Fictionbook, Litportal, Bookz.ru, and Fanzin, which developed new platforms
for the distribution of electronic books under the label “Litres,” offering
texts to be read free of charge on a computer screen or downloaded at a
cost.28 In both cases, the authors receive a fee, either from the price of the
book or from the site’s advertising income. Accompanying these new commercial
initiatives, a concomitant movement rallied together in the form of Librusek,
a platform hosted on a server in Ecuador that offered its users the
possibility of uploading works on a distributed basis.29 In contrast to
Moshkov’s centralized control, then, the library’s operator Ilya Larin adhered
to the international piracy movement, calling his site a pirate library and
gracing Librusek’s website with a small animated pirate, complete with sabre
and parrot.

The integration and proliferation of samizdat practices into a complex
capitalist framework produced new global readings of the infrapolitics of
shadow libraries. Rather than reading shadow libraries as examples of late-
socialist infrapolitics, scholars also framed them as capitalist symptoms of
“market failure,” that is, the failure of the market to meet consumer
demands.30 One prominent example of such a reading was the influential Social
Science Research Council report edited by Joe Karaganis in 2011, titled “Media
Piracy in Emerging Economies,” which noted that cultural piracy appears most
notably as “a failure to provide affordable access to media in legal markets”
and concluded that within the context of developing countries “the pirate
market cannot be said to compete with legal sales or generate losses for
industry. At the low end of the socioeconomic ladder where such distribution
gaps are common, piracy often simply is the market.”31

In the Western world, Karaganis’s reading was a progressive response to the
otherwise traditional approach to media piracy as a legal failure, which
argued that tougher laws and increased enforcement are needed to stem
infringing activity. Yet, this book argues that Karaganis’s report, and the
approach it represents, also frames the infrapolitics of shadow libraries
within a consumerist framework that excises the noncommercial infrapolitics of
samizdat from the picture. The increasing integration of Russian media
infrapolitics into Western apparatuses, and the reframing of shadow libraries
from samizdat practices of political dissent to market failure, situates the
infrapolitics of shadow libraries within a consumerist dispositive and the
individual participants as consumers. As some critical voices suggest, this
has an impact on the political potential of shadow libraries because they—in
contrast to samizdat—actually correspond “perfectly to the industrial
production proper to the legal cultural market production.”32 Yet, as the
final section in this chapter shows, one also risks missing the rich nuances
of infrapolitics by conflating consumerist infrastructures with consumerist
practice.33

The political stakes of shadow libraries such as lib.ru illustrate the
difficulties in labeling shadow libraries in political terms, since they are
driven neither by pure globalized dissent nor by pure globalized and
commodified infrastructures. Rather, they straddle these binaries as
infrapolitical entities, the political dynamics of which align both with
standardization and dissent. Revisiting once more the theoretical debate, the
case of lib.ru shows that shadow libraries may certainly be global phenomena,
yet one should be careful not to disregard the specific cultural-political
trajectories that shape each individual shadow library. Lib.ru demonstrates
how the infrapolitics of shadow libraries emerge as infrastructural
expressions of the convergence between historical sovereign trajectories,
global information infrastructures, and public-private governance structures.
Shadow libraries are not just globalized projects that exist in parallel to
sovereign state structures and global economic flows. Instead, they are
entangled in territorial public-private governance practices that produce
their own late-sovereign infrapolitics, which, paradoxically, are embedded in
larger mass digitization problematics, both on their own territory and on the
global scene.

## Monoskop

In contrast to the broad and distributed infrastructure of lib.ru, other
shadow libraries have emerged as specialized platforms that cater to a
specific community and encourage a specific practice. Monoskop is one such
shadow library. Like lib.ru, Monoskop started as a one-man project and in many
respects still reflects its creator, Dušan Barok, who is an artist, writer,
and cultural activist involved in critical practices in the fields of
software, art, and theory. Prior to Monoskop, his activities were mainly
focused on the Bratislava cultural media scene, and Monoskop was among other
things set up as an infrastructural project, one that would not only offer
content but also function as a form of connectivity that could expand the
networked powers of the practices of which Barok was a part.34 In particular,
Barok was interested in researching the history of media art so that he could
frame the avant-garde media practices in which he engaged in Bratislava within
a wider historical context and thus lend them legitimacy.

### The Shadow Library as a Legal Stratagem

Monoskop was partly motivated by Barok’s own experiences of being barred from
works he deemed of significance to the field in which he was interested. As he
notes, the main impetus to start a blog “came from a friend who had access to
PDFs of books I wanted to read but could not afford to buy as they were not
available in public libraries.”35 Barok thus began to work on Monoskop with a
group of friends in Bratislava, initially hiding it from search engine bots to
create a form of invisibility that obfuscated its existence without, however,
preventing people from finding the Log and uploading new works. Information
about the Log was distributed through mailing lists on Internet culture and, among many other posts, on e-book torrent trackers, DC++ networks, extensive
repositories such as LibGen and Aaaaarg, cloud directories, document-sharing
platforms such as Issuu and Scribd, and digital libraries such as the Internet
Archive and Project Gutenberg.36 The shadow library of Monoskop thus slowly
began to emerge, partly through Barok’s own efforts at navigating email lists
and downloading material, and partly through people approaching Monoskop
directly, sending it links to online or scanned material and even offering it
entire e-book libraries. Rather than posting these “donated” libraries in
their entirety, however, Barok and his colleagues edited the received
collection and materials so that they would fit Monoskop’s scope, and they
also kept scanning material themselves.
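
The practice of hiding a site from search engine bots, mentioned above, commonly relies on the robots.txt convention, by which a site operator asks compliant crawlers not to visit or index its pages. The sketch below is a hedged illustration of that convention only, not a description of Monoskop's actual setup; it uses Python's standard urllib.robotparser module, and the rule and URL shown are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical disallow-all robots.txt: the conventional way of asking
# compliant search engine bots not to crawl (and hence not to index) a site.
robots_txt_lines = [
    "User-agent: *",
    "Disallow: /",
]

parser = RobotFileParser()
parser.parse(robots_txt_lines)

# A rule-abiding crawler consults these rules before fetching any page;
# the URL below is purely illustrative.
print(parser.can_fetch("Googlebot", "https://example.org/log/some-title"))  # prints: False
```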

Today Monoskop hosts thematically curated collections of downloadable books on
art, culture, media studies, and other topics, partly in order to stimulate
“collaborative studies of the arts, media, and humanities.”37 Indeed, Monoskop
operates with a _boutique_ approach, offering relatively small collections of
personally selected publications to a steady following of loyal patrons who
regularly return to the site to explore new works. Its focal points are
summarized by its contents list, which is divided into three main categories:
“Avant-garde, modernism and after,” “Media culture,” and “Media, theory and
the humanities.” Within these three broad focal points, hundreds of links
direct the user to avant-garde magazines, art exhibitions and events, art and
design schools, artistic and cultural themes, and cultural theorists.
Importantly, shadow libraries such as Monoskop do not just host works
unbeknownst to the authors—authors also leak their own works. Thus, some
authors publishing with brand-name, for-profit, all-rights-reserving, print-on-paper-only publishing houses will also circulate a copy of their work on a free text-sharing network such as Monoskop.38

How might we understand Monoskop’s legal situation and maneuverings in
infrapolitical terms? Shadow libraries such as Monoskop draw their
infrapolitical strength not only from the content they offer but also from
their mode of engagement with the gray zones of new information
infrastructures. Indeed, the infrapolitics of shadow libraries such as
Monoskop can perhaps best be characterized as a stratagematic form of
infrapolitics. Monoskop neither inhabits the passive perspective of the
digital spectator nor deploys a form of tactics that aims to be failure free.
Rather, it exists as a body of informal practices and knowledges, as cunning
and dexterous networks that actively embed themselves in today’s
sociotechnical infrastructures. It operates with high sociotechnical
sensibilities, living off of the social relations that bring it into being and
stabilize it. Most significantly, Monoskop skillfully exploits the cracks in
the infrastructures it inhabits, interchangeably operating, evading, and
accompanying them. As Matthew Fuller and Andrew Goffey point out in their
meditation on stratagems in digital media, they do “not cohere into a system”
but rather operate as “extensive, open-ended listing[s]” that “display a
certain undecidability because inevitably a stratagem does not describe or
prescribe an action that is certain in its outcome.”39 Significantly, then,
failures and errors not only represent negative occurrences in stratagematic
approaches but also appeal to willful dissidents as potentially beneficial
tools. Dušan Barok’s response to a question about the legal challenges against
Monoskop evidences this stratagematic approach, as he replies that shadow
libraries such as Monoskop operate in the “gray zone,” which to him is also
the zone of fair use.40 Barok thus highlights the ways in which Monoskop
engages with established media infrastructures, not only on the level of
discursive conventions but also through their formal logics, technical
protocols, and social proprieties.

Thus, whereas Google lights up gray zones through spectacle and legal power
plays, and Europeana shuns gray zones in favor of the law, Monoskop
embraces its shadowy existence in the gray zones of the law. By working in the
shadows, Monoskop and likeminded operations highlight the ways in which the
objects they circulate (including the digital artifacts, their knowledge
management, and their software) can be manipulated and experimented upon to
produce new forms of power dynamics.41 Their ethics lie more in the ways in
which they operate as shadowy infrastructures than in intellectual reflections
upon the infrastructures they counter, without, however, creating an
opposition between thinking and doing. Indeed, as its history shows, Monoskop
grew out of a desire to create a space for critical reflection. The
infrapolitics of Monoskop is thus an infrapolitics of grayness that marks the
breakdown of clearly defined contrasts between legal and illegal, licit and
illicit, desire and control, instead providing a space for activities that are
ethically ambiguous and in which “everyone is sullied.”42

### Monoskop as a Territorializing Assemblage

While Monoskop’s stratagems play on the infrapolitics of the gray zones of
globalized digital networks, the shadow library also emerges as a late-
sovereign infrastructure. As already noted, Monoskop was from the outset
focused on surfacing and connecting art and media objects and theory from
Central and Eastern Europe. Often, this territorial dimension recedes into the
background, with discussions centering more on the site’s specialized catalog
and legal maneuvers. Yet Monoskop was initially launched partly as a response
to criticisms of the new media scenes in the Slovak and Czech Republics as “incomprehensible avant-garde.”43 It began as a simple invite-only wiki in August 2004, urging participants to collaboratively research the
history of media art. It was from the beginning conceived more as a
collaborative social practice and less as a material collection, and it
targeted noninstitutionalized researchers such as Barok himself.

As the nodes in Monoskop grew, its initial aim to research media art history
also expanded into looking at wider cultural practices. By 2010, it had grown
into a 100-gigabyte collection which was organized as a snowball research
collection, focusing in particular on “the white spots in history of art and
culture in East-Central Europe,” spanning “dozens of CDs, DVDs, publications,
as well as recordings of long interviews [Barok] did”44 with various people he
considered forerunners in the field of media arts. Indeed, Barok at first had
no plans to publish the collection of materials he had gathered over time. But
during his research stay in Rotterdam at the influential Piet Zwart Institute,
he met the digital scholars Aymeric Mansoux and Marcell Mars, who were both
active in avant-garde media practices, and they convinced him to upload the
collection.45 Due to the fragmentary character of his collection, Barok found that it corresponded well with the pre-existing wiki, to which he began
connecting and embedding videos, audio clips, image files, and works. An
important motivating factor was the publication of material that was otherwise
unavailable online. In 2009, Barok launched Monoskop Log, together with his
colleague Tomáš Kovács. This site was envisioned as an affiliated online
repository of publications for Monoskop, or, as Barok terms it, “a free access
living archive of writings on art, culture, and media technologies.”46

Seeking to create situated spaces of reflection and to shed light on the
practices of media artists in Eastern and Central Europe, Monoskop thus
launched several projects devoted to excavating media art from a situated
perspective that takes its local history into account. Today, Monoskop remains
a rich source of information about artistic practices in Central and Eastern
Europe, particularly in Poland, Hungary, Slovakia, and the Czech Republic,
relating them not only to the art histories of the region, but also to its
history of cybernetics and computing.

Another early motivation for Monoskop was to provide a situated nodal point
within globalized information infrastructures, one that emphasized the
geographical trajectories that had given rise to it. As Dušan Barok notes in an
interview,
“For a Central European it is mind-boggling to realize that when meeting a
person from a neighboring country, what tends to connect us is not only
talking in English, but also referring to things in the far West. Not that the
West should feel foreign, but it is against intuition that an East-East
geographical proximity does not translate into a cultural one.”47 From this
perspective, Monoskop appears not only as an infrapolitical project of global
knowledge, but also one of situated sovereignty. Yet, even this territorial
focus holds a strategic dimension. As Barok notes, Monoskop’s ambition was not
only to gain new knowledge about media art in the region, but also to cash in
on the cultural capital into which this knowledge could potentially be
converted. Thus, its territorial matrix first and foremost translates into
Foucault’s famous dictum that “knowledge is power.” But it is nevertheless
also testament to the importance of including more complex spatial dynamics in
one’s analytical matrix of shadow libraries, if one wishes to understand them
as more than globalized breakers of code and arbiters of what Manuel Castells
once called the “space of flows.”48

## UbuWeb

If Monoskop is one of the most comprehensive shadow libraries to emerge from
critical-artistic practice, UbuWeb is one of the earliest ones and has served
as an inspirational example for Monoskop. UbuWeb is a website that offers an
encyclopedic scope of downloadable audio, video, and plain-text versions of
avant-garde art recordings, films, and books. Most of the books fall in the
category of small-edition artists’ books and are presented on the site with
permission from the artists in question, who are not so concerned with
potential loss of revenue since most of the works are officially out of print
and never made any money even when they were commercially available. At first
glance, UbuWeb’s aesthetics appear almost demonstratively spare. Still
formatted in HTML, it upholds a certain 1990s net aesthetic that has resisted
the revamps offered by the new century’s more dynamic infrastructures. Yet, a
closer look reveals that UbuWeb offers a wealth of content, ranging from high
art collections to much more rudimentary objects. Moreover, and more
fundamentally, its critical archival practice raises broader infrapolitical
questions of cultural hierarchies, infrastructures, and domination.

### Shadow Libraries between Gift Economies and Marginalized Forms of
Distribution

UbuWeb was founded by poet Kenneth Goldsmith in response to the marginal
distribution of crucial avant-garde material. It provides open access both to
out-of-print works that find a second life through digital art reprint and to
the work of contemporary artists. Upon its opening in 2001, Goldsmith termed
UbuWeb’s economic infrastructure a “gift economy” and framed it as a
political statement that highlighted certain problems in the distribution of
and access to intellectual materials:

> Essentially a gift economy, poetry is the perfect space to practice utopian
politics. Freed from profit-making constraints or cumbersome fabrication
considerations, information can literally “be free”: on UbuWeb, we give it
away. … Totally independent from institutional support, UbuWeb is free from
academic bureaucracy and its attendant infighting, which often results in
compromised solutions; we have no one to please but ourselves. … UbuWeb posts
much of its content without permission; we rip full-length CDs into sound
files; we scan as many books as we can get our hands on; we post essays as
fast as we can OCR them. And not once have we been issued a cease and desist
order. Instead, we receive glowing emails from artists, publishers, and record
labels finding their work on UbuWeb, thanking us for taking an interest in
what they do; in fact, most times they offer UbuWeb additional materials. We
happily acquiesce and tell them that UbuWeb is an unlimited resource with
unlimited space for them to fill. It is in this way that the site has grown to
encompass hundreds of artists, thousands of files, and several gigabytes of
poetry.49

At the time of its launch, UbuWeb garnered extraordinary attention and divided
communities along lines of access and rights to historical and contemporary
artists’ media. It was in this range of responses to UbuWeb that one could
discern the formations of new infrastructural positions on digital archives,
how they should be made available, and to whom. Yet again, these legal
positions were accompanied by a territorial dynamic, including the impact of
regional differences in cultural policy on UbuWeb. Thus, as artist Jason Simon
notes, there were significant differences between the ways in which European
and North American distributors related to UbuWeb. These differences, Simon
points out, were rooted in “medium-specific questions about infrastructure,”
which differ “from the more interpretive discussion that accompanied video's
wholesale migration into fine art exhibition venues.”50 European pre-recession
public money thus permitted nonprofit distributors to embrace infrastructures
such as UbuWeb, while American distributors were much more hesitant toward
UbuWeb’s free-access model. When recession hit Europe in the late 2000s,
however, the European links to UbuWeb’s infrastructures crumbled while “the
legacy American distributors … have been steadily adapting.”51 The territorial
modulations in UbuWeb’s infrastructural set-up testify not only to how shadow
libraries such as UbuWeb are always linked up to larger political events in
complex ways, but also to the latent ephemerality of the entire project.

Goldsmith has more than once asserted that UbuWeb’s insistence on
“independent” infrastructures also means a volatile existence: “… by the time
you read this, UbuWeb may be gone. Cobbled together, operating on no money and
an all-volunteer staff, UbuWeb has become the unlikely definitive source for
all things avant-garde on the internet. Never meant to be a permanent archive,
Ubu could vanish for any number of reasons: our ISP pulls the plug, our
university support dries up, or we simply grow tired of it.” Goldsmith’s
emphasis on the ephemerality of UbuWeb is a shared condition of most shadow
libraries, many of which exist only as ghostly reminders with nonfunctional
download links or simply as 404 pages once their operators pull the plug. Rather than
lamenting this volatile existence, however, Goldsmith embraces it as an
infrapolitical stance. As Cornelia Solfrank points out, UbuWeb was—and still
is—as much an “archival critical practice that highlights the legal and social
ramifications of its self-created distribution and archiving system as it is
about the content hosted on the site.”52 UbuWeb is thus not so much about
authenticity as it is about archival defiance, appropriation, and self-
reflection. Such broader and deeper understandings of archival theory and
practice allow us to conceive of it as the kind of infrapolitics that,
according to James C. Scott, “provides much of the cultural and structural
underpinning of the more visible political action on which our attention
has generally been focused.”53 The infrapolitics of UbuWeb is devoted to
hatching new forms of organization, creating new enclaves of freedom in the
midst of orthodox ways of life, and inventing new structures of production and
dissemination that reveal not only the content of their material but also
their marginalized infrastructural conditions and the constellation of social
forces that lead to their online circulation.54

The infrapolitics of UbuWeb is testament not only to avant-garde cultures, but
also to what Hito Steyerl in her essay “In Defense of the Poor Image” refers to
as the “neoliberal radicalization of the culture as commodity” and the
“restructuring of global media industries.”55 These materials “circulate partly
in the void left by state organizations” that find it too difficult to maintain
digital distribution infrastructures, and by the art world’s commercial
ecosystems, which offer the cultural materials hosted on UbuWeb only a liminal
existence. Thus, while UbuWeb on the one hand “reveals the decline and
marginalization of certain cultural materials” whose production was often
“considered a task of the state,”56 on the other hand it shows how intellectual
content is increasingly privatized, not only in corporate terms but also
through individuals; in UbuWeb’s case, the individual is Kenneth Goldsmith, who
acts as the sole archival gatekeeper.57

## The Infrapolitics of Shadow Libraries

If the complexity of shadow libraries cannot be reduced to the contrastive
codes of “right” and “wrong” and global-local binaries, the question remains
how to theorize the cultural politics of shadow libraries. This final section
outlines three central infrapolitical aspects of shadow libraries: access,
speed, and gift.

Mass digitization poses two important questions to knowledge infrastructures:
a logistical question of access and a strategic question of to whom to
allocate that access. Copyright poses a significant logistical barrier between
users and works as a point of control in the ideal free flow of information.
In mass digitization, the drive is toward ever-increased access to information,
whereas in publishing industries with monopoly possibilities, the drive is
toward restriction and control. The uneasy fit between copyright regulations
and mass digitization projects has, as already shown, given rise to several
conflicts, either as legal battles or as copyright reform initiatives arguing
that current copyright frameworks cast doubt upon the political ideal of total
access. As with Europeana and Google Books, the question of _access_ often
stands at the core of the infrapolitics of shadow libraries. Yet, the
strategic responses to the problem of copyright vary significantly: if
Europeana moves within the established realm of legality to reform copyright
regulations and Google Books produces claims to new cultural-legal categories
such as “nonconsumptive reading,” shadow libraries offer a third
infrastructural maneuver—bypassing copyright infrastructures altogether
through practices of illicit file distribution.

Shadow libraries elicit a range of responses and discourses that place
themselves on a spectrum between condemnation and celebration. The most
straightforward response comes, unsurprisingly, from the publishing industry,
highlighting the fundamentally violent breaches of the legal order that
underpins the media industry. Such responses include legal action, policy
initiatives, and public campaigns against piracy, often staging—in more or
less explicit terms—the “pirate” as a common enemy of mankind, beyond legal
protection and to be fought by whatever means necessary.

The second response comes from the open source movement, represented among
others by the pro-reform copyright movement Creative Commons (CC), whose
flexible copyright framework has been adopted by both Europeana and Google
Books.58 While the open source movement has become a voice on behalf of the
telos of the Internet and its possibilities of offering free and unhindered
access, its response to shadow libraries has revealed the complex
infrapolitics of access as a postcolonial problematic. As Kavita Philip
argues, CC’s founder Lawrence Lessig maintains the image of the “good” Western
creative vis-à-vis the “bad” Asian pirate, citing for instance his statement
in his influential book _Free Culture_ that “All across the world, but
especially in Asia and Eastern Europe, there are businesses that do nothing
but take other people’s copyrighted content, copy it, and sell it. … This is
piracy plain and simple, … This piracy is wrong.”59 Such statements, Philip
argues, frame the Asian pirate as external to order, whether it be the order of
Western law or neoliberalism.60

The postcolonial critique of CC’s Western normative discourse has instead
sought to conceptualize piracy, not as deviant behavior in information
economies, but rather as an integral infrastructure endemic to globalized
information economies.61 This theoretical development offers valuable insights
for understanding the infrapolitics of shadow libraries. First of all, it
allows us to go beyond moral discussions of shadow libraries, and to pay
attention instead to the ways in which their infrastructures are built, how
they operate, and how they connect to other infrastructures. As Lawrence Liang
points out, if infrastructures traditionally belong to the domain of the
state, often in cooperation with private business, pirate infrastructures
operate in the gray zones of this set-up, in much the same way as slums exist
as shadow cities and copies are regarded as shadows of the original.62
Moreover, and relatedly, it reminds us of the inherently unstable form of
shadow libraries as a cultural construct, and the ways in which what gets
termed piracy differs across cultures. As Brian Larkin notes, piracy is best
seen as emerging from specific domains: dynamic localities with particular
legal, aesthetic, and social assemblages.63 In a final twist, research on
shadow library users shows that their usage is distributed globally. Multiple
sources attest to the fact that most Sci-Hub usage occurs outside the
Anglosphere. According to Alexa Internet analytics, the top five country
sources of traffic to Sci-Hub were China, Iran, India, Brazil, and Japan,
accounting for 56.4 percent of recent traffic. As of early 2016,
data released by Sci-Hub’s founder Alexandra Elbakyan also shows high usage in
developed countries, with a large proportion of the downloads coming from the
US and countries within the European Union.64 The same tendency is evident in
the #ICanHazPDF Twitter phenomenon, which, while framed as “civil disobedience”
to aid users in the Global South,65 nevertheless sees higher numbers of posts
from the US and Great Britain.66

This brings us to the second infrapolitical aspect of shadow libraries, namely
the question of distribution and speed. In their article “Book Piracy as Peer
Preservation,” Dennis Tenen and Maxwell Henry Foxman note that rather than condemning book
piracy _tout court_ , established libraries could in fact learn from the
infrastructural set-ups of shadow libraries in relation to participatory
governance, technological innovation, and economic sustainability.67 Shadow
libraries are often premised upon an infrastructure that includes user
participation without, however, operating in an enclosed sphere. Often, shadow
libraries coordinate their actions by use of social media platforms and online
forums, including Twitter, Reddit, and Facebook, and the primary websites used
to host the shared files are AvaxHome, LibGen, and Sci-Hub. Commercial online
cloud storage accounts (such as Dropbox and Google Drive) and email are also
used to share content in informal ways. Users interested in obtaining an
article or book chapter will disseminate their request over one or more of the
platforms mentioned above. Other users of those platforms try to get the
requested content via their library accounts or employer-provided access, and
the actual files being exchanged are often hosted on other websites or emailed
to the requesting users. Through these networks, shadow libraries offer
convenient and speedy access to books and articles. Little empirical evidence
is available, but one study does indicate that a large number of shadow
library downloads are made because obtaining a PDF from a shadow library is
easier than using the legal access methods offered by a university’s
traditional channels, including formalized research libraries.68 Other studies
indicate, however, that many downloads occur because users lack, or perceive
themselves to lack, full-text access to the desired texts.69

Finally, as indicated in the introduction to this chapter, shadow libraries
produce what we might call a cultural politics of parasitism. In the normative
model of shadow libraries, discourse often centers upon piracy as a theft
economy. Other discourses, drawing upon anthropological sources, have pointed
out that peer-to-peer file-sharing sites in reality organize around a gift
economy, that is, “a system of social solidarity based on a structured set of
gift exchange and social relationships among consumers.”70 This chapter,
however, ends with a third proposal: that shadow libraries produce a
parasitical form of infrapolitics. In _The Parasite_ , philosopher Michel
Serres proposes a way of thinking about relations of transfer—in social,
biological, and informational contexts—as fundamentally parasitic, that is, a
subtractive form of “taking without giving.” Serres contrasts the parasitic
model with established models of society based on notions such as exchange and
gift giving.71 Shadow libraries produce an infrapolitics that denies the
distinction between producers and subtractors of value, allowing us instead to
focus on the social roles infrastructural agents perform. Restoring a sense of
the wider context of parasitism to shadow libraries does not provide a clear-
cut solution as to when and where shadow libraries should be condemned and
when and where they should be tolerated. But it does help us ask questions in
a different way. And it certainly prevents us from regarding shadow libraries
as the “other” in the landscape of mass digitization. Shadow libraries
instigate new creative relations, the dynamics of which are infrastructurally
premised upon the medium they use. Just as typewriters were an important
component of samizdat practices in the Soviet Union, digital infrastructures
are central components of shadow libraries, and in many respects shadow
libraries bring to the fore the same cultural-political questions as other
forms of mass digitization: questions of territorial imaginaries,
infrastructures, regulation, speed, and ethics.

## Notes

1. Serres 1982, 55.
2. Serres 1982, 36.
3. Serres 1982, 36.
4. Samyn 2012.
5. I stick with “shadow library,” a term that I first found in Lawrence Liang’s (2012) writings on copyright and have since seen meaningfully unfolded in a variety of contexts. Part of its strength is its sidestepping of the question of the pirate and that term’s colonial connotations.
6. Eckstein and Schwarz 2014.
7. Scott 2009, 185–201.
8. See also Maxim Moshkov’s own website, hosted on lib.ru.
9. Carey 2015.
10. Schmidt 2009.
11. Bodó 2016, “Libraries in the post-scarcity era.” As Balazs Bodó notes, the first mass-digitized shadow archives in Russia were run by professors from the hard sciences, but the popularization of computers soon gave rise to a much more varied and widespread shadow library terrain, fueled by “enthusiastic readers, book fans, and often authors, who spared no effort to make their favorite books available on FIDOnet, a popular BBS system in Russia.”
12. Stelmakh 2008, 4.
13. Bodó 2016.
14. Bodó 2016.
15. Vul 2003.
16. “In Defense of Maxim Moshkov’s Library,” The International Union of Internet Professionals, n.d.
17. Ibid.
18. Ibid.
19. Schmidt 2009, 7.
20. Ibid.
21. Carey 2015.
22. Mjør 2009, 84.
23. Bodó 2015.
24. Kiriya 2012.
25. Yurchak 2008, 732.
26. Komaromi, 74.
27. Mjør, 85.
28. Litres.ru.
29. Library Genesis.
30. Kiriya 2012.
31. Karaganis 2011, 65, 426.
32. Kiriya 2012, 458.
33. For a great analysis of late-Soviet youth’s relationship with consumerist products, read Yurchak’s careful study _Everything Was Forever, Until It Was No More: The Last Soviet Generation_ (2006).
34. “Dušan Barok: Interview,” _Neural_ 44 (2010), 10.
35. Ibid.
36. Ibid.
37. “Monoskop,” Monoskop, last modified March 28, 2018.
38. “Dušan Barok: Interview,” _Neural_ 44 (2010), 10.
39. Fuller and Goffey 2012, 21.
40. “Dušan Barok: Interview,” _Neural_ 44 (2010), 11.
41. In an interview, Dušan Barok mentions his inspirations, including early examples such as textz.com, a shadow library created by the Berlin-based artist Sebastian Lütgert. Textz.com was one of the first websites to facilitate free access to books on culture, politics, and media theory in the form of text files. Often the format would itself toy with legal limits. During a legal debacle with Suhrkamp Verlag, Lütgert thus declared in a mischievous manner that the website would offer a text in various formats: “Today, we are proud to announce the release of walser.php, a 10,000-line php script that is able to generate the plain ascii version of ‘Death of a Critic.’ The script can be redistributed and modified (and, of course, linked to) under the terms of the GNU General Public License, but may not be run without written permission by Suhrkamp Verlag. Of course, reverse-engineering the writings of senile German revisionists is not the core business of textz.com, so walser.php includes makewalser.php, a utility that can produce an unlimited number of similar (both free as in speech and free as in copy) php scripts for any digital text”; see “Suhrkamp recalls walser.pdf, textz.com releases walser.php,” Rolux.org.
42. Fuller and Goffey 2012, 11.
43. “MONOSKOP Project Finished,” COL-ME Co-located Media Expedition, [www.col-me.info/node/841](http://www.col-me.info/node/841).
44. “Dušan Barok: Interview,” _Neural_ 44 (2010), 10.
45. Aymeric Mansoux is a senior lecturer at the Piet Zwart Institute whose research deals with the defining, constraining, and confining of cultural freedom in the context of network-based practices. Marcell Mars is an advocate of free software and a researcher who is also active in a shadow library named _Public Library_ (also interchangeably known as Memory of the World).
46. “Dušan Barok,” Memory of the World.
47. “Dušan Barok: Interview,” _Neural_ 44 (2010), 10.
48. Castells 1996.
49. Kenneth Goldsmith, “UbuWeb Wants to Be Free,” last modified July 18, 2007.
50. Jacob King and Jason Simon, “Before and After UbuWeb: A Conversation about Artists’ Film and Video Distribution,” _Rhizome_, February 20, 2014.
51. King and Simon 2014.
52. Sollfrank 2015.
53. Scott 1990, 184.
54. For this, I am indebted to Hito Steyerl’s essay “In Defense of the Poor Image,” in her book _The Wretched of the Screen_, 31–59.
55. Steyerl 2012, 36.
56. Steyerl 2012, 39.
57. Sollfrank 2015.
58. Other significant open source movements include the Free Software Foundation, the Wikimedia Foundation, and several open access initiatives in science.
59. Lessig 2005, 57.
60. Philip 2005, 212.
61. See, for instance, Larkin 2008; Castells and Cardoso 2012; Fredriksson and Arvanitakis 2014; Burkart 2014; and Eckstein and Schwarz 2014.
62. Liang 2009.
63. Larkin 2008.
64. John Bohannon, “Who’s Downloading Pirated Papers? Everyone,” _Science Magazine_, April 28, 2016.
65. “The Scientists Encouraging Online Piracy with a Secret Codeword,” _BBC Trending_, October 21, 2015.
66. Liu 2013.
67. Tenen and Foxman 2014.
68. See Kramer 2016.
69. Gardner and Gardner 2017.
70. Giesler 2006, 283.
71. Serres 2013, 8.

# III Diagnosing Mass Digitization

# 5 Lost in Mass Digitization

## The Desire and Despair of Large-Scale Collections

In 1995, founding editor of _Wired_ magazine Kevin Kelly mused upon what a
digital library might look like:

> Two decades ago nonlibrarians discovered Borges’s Library in silicon
circuits of human manufacture. The poetic can imagine the countless rows of
hexagons and hallways stacked up in the Library corresponding to the
incomprehensible micro labyrinth of crystalline wires and gates stamped into a
silicon computer chip. A computer chip, blessed by the proper incantation of
software, creates Borges’s Library on command. … Pages from the books appear
on the screen one after another without delay. To search Borges’s Library of
all possible books, past, present, and future, one needs only to sit down (the
modern solution) and click the mouse.1

At the time of Kelly’s writing, book digitization on a massive scale had not
yet taken place. Building his chimerical dream around Jorge Luis Borges’s
famous speculative tale of the Library of Babel, Kelly not only dreamed up a
fantasy of what a digital library might be in an imaginary dialogue with
Borges; he also argued that Borges’s vision had already been realized, by grace
of nonlibrarians, or—more specifically—programmers. In particular, Kelly
mentions Karl Sims, a computer
scientist working on a supercomputer called Connection Machine 5 (you may
remember it from the set of _Jurassic Park_ ), who had created a simulated
version of Borges’s library.2

Twenty years after Kelly’s vision, a whole host of mass digitization projects
have sought more or less explicitly to fulfill it. Incidentally,
Brewster Kahle, one of the lead engineers of the aforementioned Connection
Machine, has become a key figure in the field. Kahle has long dreamed of
creating a universal digital library, and has worked to fulfill this dream in
practical terms through the nonprofit Internet Archive project, which he
founded in 1996 with the stated mission of creating “universal access to all
knowledge.” In an op-ed in 2017, Kahle lamented the recent lack of progress in
mass digitization and argued for the need to create a new vision for mass
digitization, stating, “The Internet Archive, working with library partners,
proposes bringing millions of books online, through purchase or digitization,
starting with the books most widely held and used in libraries and
classrooms.”3 Reminding us that three major entities have “already digitized
modern materials at scale: Google, Amazon, and the Internet Archive, probably
in that order of magnitude,”4 Kahle nevertheless notes that “bringing
universal access to books” has not yet been achieved because of a fractured
field that diverges on questions of money, technology, and legal clarity. Yet,
outlining his new vision for how a sustainable mass digitization project could
be achieved, Kahle remains convinced that mass digitization is both a
necessity and a possibility.

While Brewster Kahle, Kevin Kelly, Google, Amazon, Europeana’s member
institutions, and others disagree on how to achieve mass digitization, for
whom, and in what form, they are all united in their quest for digitization on
a massive scale. Many shadow libraries operate with the same quantitative
logic, proudly asserting the size of their massive holdings on their front
pages.

Given the fractured field of mass digitization, and the lack of economic
models for how to actually make mass digitization sustainable, why does the
common dream of mass digitization persist? As this chapter shows, the desire
for quantity, which drives mass digitization, is—much like the Borges stories
to which Kelly also refers—laced with ambivalence. On the one hand, the
quantitative aspirations are driven by the basic assumption that “more
is more”: more data and more cultural memory equal better industrial and
intellectual progress. On the other hand, the sheer scale of ambition also
causes frustration, anxiety, and failed plans.

The sense that sheer size and big numbers hold the promise of progress and
greatness is nothing new, of course. And mass digitization brings together
three fields that have each historically grown out of scalar ambitions:
collecting practices, statistics, and industrialization processes.
Historically, as cultural theorist Couze Venn reminds us, most large
collections bear the imprint of processes of (cultural) colonization, human
desires, and dynamics of domination and superiority. We therefore find in
large collections the “impulses and yearnings that have conditioned the
assembling of most of the collections that today establish a monument to past
efforts to gather together knowledge of the world and its treasury of objects
and deeds.”5 The field of statistics, moreover, so vital to the evolution of
modern governance models, is also premised upon the accumulation of ever more
information.6 And finally, we all recognize the signs of modern
industrialization processes as they appear in the form of globalization,
standardization, and acceleration. Indeed, as French sociologist Henri
Lefebvre once argued (with a nod to Marx), the history of modern society could
plainly and simply be seen as the history of accumulation: of space, of
capital, of property.7

In mass digitization, we hear the political echoes of these histories. From
Jeanneney’s war cry to defend European patrimonies in the face of Google’s
cultural colonization to Google’s megalomaniac numbers game and Europeana’s
territorial maneuverings, scale is used as a point of reference not only to
describe the space of cultural objects in themselves but also to outline a
realm of cultural command.

A central feature in the history of accumulation and scale is the development
of digital technology and the accompanying new modes of information
organization. But even before then, the invention of new technologies offered
not only new modes of producing and gathering information and new
possibilities of organizing information assemblages, but also new questions
about the implications of these leaps in information production. As historians
Ann Blair and Peter Stallybrass show, “infolust,” that is, the cultural
attitude that values expansive collections for long-term storage, emerged in
the early Renaissance period.8 In that period, new print technology gave rise
to a new culture of accumulating and stockpiling notes and papers, even
without having a specific compositional purpose in mind. Within this scholarly
paradigm, new teleologies were formed that emphasized the latent value of any
piece of information, expressed for instance by Joachim Jungius’s exclamation
that “no field was too remote, no author too obscure that it would not yield
some knowledge or other” and Gabriel Naudé’s observation that there is “no
book, however bad or decried, which will not be sought after by someone over
time.”9 The idea that any piece of information was latently valuable was later
remarked upon by Melvil Dewey, who noted at the beginning of the twentieth
century that a “normal librarian’s instinct is to keep every book and
pamphlet. He knows that possibly some day, somebody wants it.”10

Today, mass digitization repeats similar concerns. It reworks the old dream of
an all-encompassing and universal library and has foregrounded once again
questions about what to save and what to let go. What, one might ask, would
belong in such a library? One important field of interest is the question of
whether, and how, to preserve metadata—today’s marginalia. Is it sufficient to
digitize cultural works, or should all accompanying information about the
provenance of the work also be included? And how can we agree upon what
marginalia actually is across different disciplines? Mass digitization
projects in natural history rarely digitize marginalia such as logs and
written accounts, focusing only on what to that discipline is the main object
at hand, for example, a piece of rock, a fly specimen, a pressed plant. Yet,
in the history of science, logs are an invaluable source of information about
how the collected object ended up in the collection, the meaning it had to the
collector, and the place it takes in the collection.11 In this way, new
questions with old trajectories arise: What is important for understanding a
collection and its life? What should be included and excluded? And how will we
know what will turn out to be important in the future?

In the era of big data, the imperative is often to digitize and “save all.”
Prestige mass digitization projects such as Google Books and Europeana have
thus often contextualized their importance in terms of scale. Indeed, as we
saw in the previous chapters, the question of scale has been a central point
of political contestation used to signal infrastructural power. Thus the hype
around Google Books, as well as the political ire it drew, centered on the
scale of the project just as quantitative goals are used in Europeana to
signal progress and significance. Inherent in these quantitative claims are
not only ideas about political power, but also the widespread belief in
digital circles—and the political regimes that take inspiration from them—that
the more information the user is able to access, the more empowered the user
is to navigate and make meaning on their own. In recent years, the imaginaries
of freedom of navigation have also been joined by fantasies of freedom of
infrastructural construction through the image of the platform. Mass
digitization projects are therefore expected not only to offer the user the
potential to navigate collections freely, but also the possibility to build new
products and services on top of them.12 Yet, as this chapter argues, the ethos of potentially unlimited
expansion also prompts a new set of infrapolitical questions about agency and
control. While these questions are inherently related to the larger questions
of territory and power explored in the previous chapters, they occur on a
different register, closer to the individual user and within the spatialized
imaginaries of digital information.

As many critics have noted, the logic of expansion and scale, and the
accompanying fantasies of the empowered user, often build on neoliberal
subjectification processes. While highly seductive, these fantasies often fail to take
into account the reality of social complexity. Therefore, as Lisa Nakamura
notes, the discourse of complete freedom of navigation through technological
liberation—expressed aptly in Microsoft’s famous slogan “Where do you want to
go today?”—assumes, wrongly, that everyone is at liberty to move about
unhindered.13 And the fantasy of empowerment through platforming is often also
shot through with neoliberal ideals that not only fail to take into account
the complex infrapolitical realities of social interaction, but also rely on
an entrepreneurial epistemology that evokes “a flat, two-dimensional stage on
which resources are laid out for users to do stuff with” and which we are not
“inclined to look underneath or behind it, or to question its structure.”14

This chapter unfolds these central infrapolitical problematics of the spatial
imaginaries of knowledge in relation to a set of prevalent cultural spatial
tropes that have gained new life in digital theory and that have informed the
construction and development of mass digitization projects: the flaneur, the
labyrinth, and the platform. Cultural reports, policy papers, and digital
design strategies often use these three tropes to elicit images of pleasure
and playfulness in mass digitization projects; yet, as the following sections
show, they also raise significant questions of control and agency, not least
against the backdrop of ever-increasing scales of information production.

## Too Much—Never Enough

The question of scale in mass digitization is often posed as a rational quest
for knowledge accumulation and interoperability. Yet this section argues that
digitized collections are more than just rational projects; they strike deep
affective cords of desire, domination, and anxiety. As Couze Venn reminds us,
collections harbor an intimate connection between cognition and affective
economy. In this connection, the rationalized drive to collect is often
accompanied by a slippage, from a rationalized urge to a pathological drive
ultimately associated with desire, power, domination, anxiety, nostalgia,
excess, and—sometimes even—compulsion and repetition.15 The practice of
collecting objects thus not only signals a rational need but often also
springs from desire, and as psychoanalysis has taught us, a sense of lack is
the reflection of desire. As Slavoj Žižek puts it, “desire’s _raison d’être_
is not to realize its goal, to find full satisfaction, but to reproduce itself
as desire.”16 Therefore, no matter how much they collect, collectors will
rarely experience their collections as complete and will often be haunted by
the desire to collect more.

In addition to the frightening (yet titillating) aspect of never having our
desires satisfied, large collections also give rise to a set of information
pathologies that, while different in kind, share an understanding of
information as intimidation. The experience is generally induced by two
inherently linked factors. First, the size of the cultural collection has
historically implied a powerful collector with the means to gather
expensive materials from all over the world, and a large collection has thus
had the basic function of impressing and, if need be, intimidating people.
Second, large collections give rise to the sheer subjective experience of
being overwhelmed by information and a mental incapacity to take it all in.
Both factors point to questions of potency and importance. And both work to
instill fear in the visitor. As Voltaire once noted, “a great library has
the quality of frightening those who look upon it.”17

The intimidating nature of large collections has been a favored trope in
cultural representations. The most famous example of a gargantuan, even
insanity-inducing, library is of course Jorge Luis Borges’s tale of the
Library of Babel, the universal totality of which becomes both a monstrosity
in the characters’ lives and a source of hope, depending on their willingness
to make peace and submit themselves to the library’s infinite scale and
Kafkaesque organization.18 But Borges’s nonfiction piece from 1939, _The Total
Library,_ also serves as an elegant tale of an informational nightmare. _The
Total Library_ begins by noting that the dream of the utopia of the total
library “has certain characteristics that are easily confused with virtues”
and ends with a more somber caution: “One of the habits of the mind is the
invention of horrible imaginings. … I have tried to rescue from oblivion a
subaltern horror: the vast, contradictory Library, whose vertical wildernesses
of books run the incessant risk of changing into others that affirm, deny, and
confuse everything like a delirious god.” 19

Few escape the intimidating nature of large collections. But while attention
has often been given to the citizen subjected to the disciplining force of the
sovereign state in the form of its institutions, less attention has been given
to those who have had to structure and make sense of these intimidating
collections. Until recently, cultural collections were usually oriented toward
the figure of the patron or, in more abstract geographical terms, (God-given)
patrimony. Renaissance cabinets of curiosities were meant to astonish and
dazzle; the ostentatious wealth of the Baroque museums of the seventeenth and
eighteenth centuries offered demonstrations of Godly power; and bourgeois
museums of the nineteenth century positioned themselves as national
institutions of _Bildung_. But while cultural memory institutions have worked
first and foremost to mirror to an external audience the power and the psyche
of their owners in individual, religious, and/or geographical terms, they have
also consistently had to grapple internally with the problem of how best to
organize and display these collections.

One of the key generators of anxiety in vast libraries has been the question
of infrastructure. Each new information paradigm and each new technology has
induced new anxieties about how best to organize information. The fear of
disorder haunted both institutions and individuals. In his illustrious account
of Ephraim Chambers’s _Cyclopaedia_ (the forerunner of Denis Diderot’s and Jean
le Rond d’Alembert’s famous Enlightenment project, the _Encyclopédie_ ),
Richard Yeo thus recounts how Gottfried Leibniz complained in 1680 about “that
horrible mass of books which keeps on growing” so that eventually “the
disorder will become nearly insurmountable.”20 Five years on, the French
scholar and critic Adrien Baillet warned his readers, “We have reason to fear
that the multitude of books which grows every day in a prodigious fashion will
make the following centuries fall into a state as barbarous as that of the
centuries that followed the fall of the Roman Empire.”21 And centuries later,
in the wake of the typewriter, the annual report of the Secretary of the
Smithsonian Institution in Washington, DC, drew attention to the
infrastructural problem of organizing the information this new technology now
made available, noting that “about twenty thousand volumes …
purporting to be additions to the sum of human knowledge, are published
annually; and unless this mass be properly arranged, and the means furnished
by which its contents may be ascertained, literature and science will be
overwhelmed by their own unwieldy bulk.”22 The experience of feeling
overwhelmed by information and lacking the right tools to handle it is no
joke. Indeed, a number of German librarians are documented to have gone insane
between 1803 and 1825 in the wake of the information glut that followed the
secularization of ecclesiastical libraries.23 The desire for grand collections
has thus always been accompanied by an anxiety relating to questions of
infrastructure.

As the history of collecting pathologies shows, reducing mass digitization
projects to rational and technical information projects would deprive them of
their rich psychological dimensions. Instead of discounting these pathologies,
we should acknowledge them, and examine not only their nature, but also their
implications for the organization of mass digitization projects. As the
following section shows, the pathologies not only exist as psychological
forces, but also as infrastructural imaginaries that directly impact theories
on how best to organize information in mass digitization. If the scale of mass
digitization projects is potentially limitless, how should they be organized?
And how will we feel when moving about in their gargantuan archives?

## The Ambivalent Flaneur

In an article on cultures of archiving, sociologist Mike Featherstone asked
whether “the expansion of culture available at our fingertips” could be
“subjected to a meaningful ordering,” or whether the very “desire to remedy
fragmentation” should be “seen as clinging to a form of humanism with its
emphasis upon cultivation of the persona and unity which are now regarded as
merely nostalgic.”24 Featherstone raised the question in response to the
popularization of the Internet at the turn of the millennium. Yet, as the
previous section has shown, his question is probably as old as the collecting
practices themselves. Such questions have become no less significant with mass
digitization. How are organizational practices conceived of as meaningful
today? As we shall see, this question not only relates to technical
characteristics but is also informed by a strong spatial imaginary that often
takes the shape of labyrinthine infrastructures and orients itself toward the
figure of the user. Indeed, the role of the organizer of knowledge, and
therefore the accompanying responsibility of making sense of collections, has
been transferred from knowledge professionals to individuals.

Today, as seen in all the examples of mass digitization we have explored in
the previous chapters, cultural memory institutions face a different paradigm
than that of their disciplining eighteenth- and nineteenth-century
predecessors. In an age that encourages individualism, democratic
ideals, and cultural participation, the orientations of the cultural memory
institutions have shifted in discourse, practice, or both, toward an emphasis
on the importance of the subjective experience and active participation of the
individual visitor. As part of this shift, and as a result of the increasing
integration of the digital imaginary and production apparatus into the field
of cultural memory, the visitor has thus metamorphosed from a disciplinary
subject to a prosumer, produser, participant, and/or user.

The organizational shift in the cultural memory ecosystem means that
visionaries and builders of mass digitization infrastructures now pay
attention not only to how collections may reflect upon the institution that
holds them, but also to how the user experiences the informational navigation
of collections. This is not to say that making an impression, or even
disciplining the user, is not a concern for many mass digitization projects;
their constant public claims to literal greatness through numbers evidence
this. Yet, today’s projects also have to contend with the opinion of the public
and must make their offerings palatable and consumable rather than elitist and
intimidating. The concern of the builders of mass digitization infrastructure
is therefore not only to create an internal logic for their collections, but
also to maximize the user’s experience of being offered a wealth of
information, while mitigating the danger of giving the visitor a sense of
losing themselves, or even drowning, in
information. An important question for builders of mass digitization projects
has therefore been how to build visual and semantic infrastructures that offer
the user a sense of meaningful direction as well as a desire to keep browsing.

While digital collections are in principle no longer tethered to their
physical origins in spatial terms, we still encounter ideas about them in
spatialized terms, often using notions such as trails, paths, and alleyways to
visualize the spaces of digital collections.25 This form of spatialized logic
did not emerge with the mass digitization of cultural heritage collections,
however, but also resides at the heart of some of the most influential early
theories of the digital realm.26 These theorized and conceptualized
the web as a new form of architectural infrastructure, not only in material
terms (such as cables and servers) but also as a new experiential space.27 And
in this spatialized logic, the figure of the flaneur became a central
character. Thus, we saw in the 1990s the rise of a digital interpretation of
the flaneur, originally an emblematic figure of modern urban culture at the
turn of the twentieth century, in the form of the virtual flaneur or the
cyberflaneur. In 1994, German net artists Heiko Idensen and Matthias Krohn
paid homage to the urban figure, noting in a text that “the screen winks at
the flaneur” and locating the central tenets of computer culture in the
“intoxication of the flânerie. Screens as streets and homes … of the crowd?”28
Later, artist Steven Goldate provided a simple equation between online and
offline spaces, noting among other things that “What the city and the street
was to the flaneur, the Internet and the Superhighway have become to the
Cyberflaneur.”29

Scholars, too, explored the potentials and limits of thinking about the user
of the Internet in flaneurian terms. Thus, Mike Featherstone drew parallels
between the nineteenth-century flaneur and the virtual flaneur, exploring the
similarities and differences between navigational strategies, affects, and
agencies in the early urban metropolis and the emergent digital realm of the
1990s.30

Although the discourse on the digital flaneur was most prevalent in the 1990s,
it still lingers on in contemporary writings about digitized cultural heritage
collections and their design. A much-cited article by computer scientists
Marian Dörk, Sheelagh Carpendale, and Carey Williamson, for instance, notes
the striking similarity between the “growing cities of the 19th century and
today’s information spaces” and the relationship between “the individual and
the whole.”31 Dörk, Carpendale, and Williamson use the figure of the flaneur
to emphasize the importance of supporting not only utilitarian information
needs through grand systems but also leisurely information surfing behaviors
on an individual level. Their reflections relate
to the experience of moving about in a mass of information and ways of making
sense of this information. What does it mean to make sense of mass
digitization? How can we say or know that the past two hours we spent
rummaging about in the archives of Google Books, digging deeper in Europeana,
or following hyperlinks in Monoskop made sense, and by whose standards? And
what are the cultural implications of using the flaneur as a cultural
reference point for these ideals? We find few answers to these questions in
Dörk, Carpendale, and Williamson’s article, or in related articles that invoke
the flaneur as a figure of inspiration for new search strategies. Thus, the
figure of the flaneur is predominantly used to express the pleasurable and
productive aspect of archival navigation. But in its emphasis on pleasure and
leisure, the figure neglects the much more ambivalent atmosphere that
enshrouds the flaneur as he navigates the modern metropolis. Nor does it
problematize the privileged viewpoint of the flaneur.

The character of the flaneur, both in its original instantiations in French
literature and in Walter Benjamin’s early twentieth-century writings, was
certainly driven by pleasure; yet, on a more fundamental level, his existence
was also, as Elizabeth Wilson points out in her feminist reading of the
flaneur, “a sorrowful engagement with the melancholy of cities,” which arose
“partly from the enormous, unfulfilled promise of the urban spectacle, the
consumption, the lure of pleasure and joy which somehow seem destined to be
disappointed.”32 Far from an optimistic and unproblematic engagement with
information, then, the figure of the flaneur also evokes deeper anxieties
arising from commodification processes and the accompanying melancholic
realization that no matter how much one strolls and scrolls, nothing one
encounters can ever fully satisfy one’s desires. Benjamin even strikingly
spatializes (and sexualizes) this mental state in an infrastructural
imaginary: the labyrinth. The labyrinth is thus, Benjamin suggests, “the home
of the hesitant. The path of someone shy of arrival at a goal easily takes the
form of a labyrinth. This is the way of the (sexual) drive in those episodes
which precede its satisfaction.”33

Benjamin’s hesitant flaneur caught in an unending maze of desire stands in
contrast to the uncomplicated flaneur invoked in celebratory theories on the
digital flaneur. Yet, recent literature on the design of digital realms
suggests that the hesitant man caught in a drive for more information is a
much more accurate image of the digital flaneur than the man-in-the-know.34
Perhaps, then, the allegorical figure of the flaneur in digital design should
be used less to address pleasurable wandering and more to invoke “the most
characteristic response of all to the wholly new forms of life that seemed to
be developing: ambivalence.”35 Caught up in the commodified labyrinth of the
modern digitized archive, the digital flaneur of mass digitization might just
as easily get stuck in a repetitive, monotonous routine of scrolling and
downloading new things, forever suspended in a state of unfulfilled desire,
as move about in meaningful and pleasurable ways.36

Moreover, and just as importantly, the figure of the flaneur is also entangled
in a cultural matrix of assumptions about gender, ability, and colonialism. In
short: the flaneur is a white, able-bodied male. As feminist theory attests,
the concept of the flaneur is male by definition. Some
feminists such as Griselda Pollock and Janet Wolff have denied the possibility
of a female variant altogether, because of women’s status as (often absent)
objects rather than subjects in the nineteenth-century urban environment.37
Others, such as Elizabeth Wilson, Deborah Epstein Nord, and Mica Nava, have
complicated the issue by alluding to the opportunities and limitations of
thinking about a female variant of the flaneur, for instance a flâneuse.38
These discussions have also reverberated in the digital sphere in new
variations.39 Whatever position one assumes, it is clear that the concept of
the flaneur, even in its female variant, is a complicated one that carries
problematic allusions to a universal, privileged figure.

In similar terms, the flaneur also has problematic colonial and racial
connotations. As James Smalls points out in his essay “Race As Spectacle in
Late-Nineteenth-Century French Art and Popular Culture,” the racial dimension
of the flaneur is “conspicuously absent” from most critical engagements with
the concept.40 Yet, as Smalls notes, the question of race is crucial, since
“the black man … is not privileged to lose himself in the Parisian crowd, for
he is constantly reminded of his epidermalized existence, reflected back at
him not only by what he sees, but by what we see as the assumed ‘normal’
white, universal spectator.”41 This othering is, moreover, not limited to the
historical scene of nineteenth-century Paris, but still remains relevant
today. Thus, as Garnette Cadogan notes in his essay “Walking While Black,”
non-white people are offered none of the freedoms of blending into the crowd
that Baudelaire’s and Benjamin’s flaneurs enjoyed. “Walking while black
restricts the experience of walking, renders inaccessible the classic Romantic
experience of walking alone. It forces me to be in constant relationship with
others, unable to join the New York flaneurs I had read about and hoped to
join.”42

Lastly, the classic figure of the flaneur also assumes a body with no
disabilities. As Marian Ryan notes in an essay in the _New York Times_ , “The
art of flânerie entails blending into the crowd. The disabled flaneur can’t
achieve that kind of invisibility.”43 What might we take from these critical
interventions into the uncomplicated discourse of the flaneur? Importantly,
they counterbalance the dominant seductive image of the empowered user, and
remind us of the colonial male gaze inherent in any invocation of the metaphor
of the flaneur, which for the majority of users is a subject position that is
simply not available (nor perhaps desirable).

The limitations of the figure of the flaneur raise questions not only about
the metaphor itself, but also about the topography of knowledge production it
invokes. As already noted, Walter Benjamin placed the flaneur within a larger
labyrinthine topology of knowledge production, where the flaneur could read
the spectacle in front of him without being read himself. Benjamin himself put
the flaneur to rest with a reading of an Edgar Allan Poe story, tracing the
demise of the flaneur in an increasingly capitalist topography and noting in
melancholy terms that “The bazaar is the last hangout
of the flaneur. If in the beginning the street had become an interieur for
him, now this interieur turned into a street, and he roamed through the
labyrinth of merchandise as he had once roamed through the labyrinth of the
city. It is a magnificent touch in Poe’s story that it includes along with the
earliest description of the flaneur the figuration of his end.”44 In 2012,
Evgeny Morozov in similar terms declared the death of the cyberflaneur.
Linking the commodification of urban spaces in nineteenth-century Paris to the
commodification of the Internet, Morozov noted that “it’s no longer a place
for strolling—it’s a place for getting things done” and that “Everything that
makes cyberflânerie possible—solitude and individuality, anonymity and
opacity, mystery and ambivalence, curiosity and risk-taking—is under
assault.”45 These two death sentences, separated by a century, link the
environment of the flaneur to significant questions about the commodification
of space and its infrapolitical implications.

Exploring the implications of this topography, the following section suggests,
will help us understand the infrapolitics of the spatial imaginaries of mass
digitization, not only in relation to questions of globalization and late
sovereignty, but also to cultural imaginaries of knowledge infrastructures.
Indeed, these two dimensions are far from mutually exclusive, but rather
belong to the same overarching tale of the politics of mass digitization.
Thus, while the material spatial infrastructures of mass digitization projects
may help us appreciate certain important political dynamics of Europeana,
Google Books, and shadow libraries (such as their territorializing features or
copyright contestations in relation to knowledge production), only an
inclusion of the infrastructural imaginaries of knowledge production will help
us understand the complex politics of mass digitization as it metamorphoses
from analog buildings, shelves, and cabinets to the circulatory networks of
digital platforms.

## Labyrinthine Imaginaries: Infrastructural Perspectives of Power and
Knowledge Production

If the flaneur is a central early figure in the cultural imaginary of the
observer of cultural texts, the labyrinth has long served as a cultural
imaginary of the library, and, in larger terms, the spatialized
infrastructural conditions of knowledge and power. Thus, literature is rife
with works that draw on libraries and labyrinths to convey stories about
knowledge production and the power struggles hereof. Think only of the elderly
monk-librarian in Umberto Eco’s classic, _The Name of the Rose,_ who notes that “the library is a great labyrinth, sign of the labyrinth of the world. You enter and you do not know whether you will come out”;46 or consider the haunting images of being lost in Jorge Luis Borges’s tales about labyrinthine libraries.47 This section therefore turns to the infrastructural space of the
labyrinth, to show that this spatial imaginary, much like the flaneur, is
loaded with cultural ambivalence, and to explore the ways in which the
labyrinthine infrastructural imaginary emphasizes and crystallizes the
infrapolitical tension in mass digitization projects between power and
perspective, agency and environment, playful innovation and digital labor.

The labyrinth is a prevalent literary trope, found in authors from Ovid,
Virgil, and Dante to Dickens and Nietzsche, and it has been used particularly
in relation to issues of knowledge and agency, and in haunting and nightmarish
terms in modern literature.48 As the previous section indicates, the labyrinth
also provides a significant image for understanding our relationship to mass
digitization projects as sites of both knowledge production and experience.
Indeed, one shadow library is even named _Aleph_ , which refers to the ancient Hebrew letter and likely also nods at Jorge Luis Borges’s labyrinthine short story _The Aleph,_ on infinite labyrinthine architectures. Yet, what kind of
infrastructure is a labyrinth, and how does it relate to the potentials and
perils of mass digitization?

In her rich historical study of labyrinths, Penelope Doob argues that the
labyrinth possesses a dual potentiality: on the one hand, if experienced from
within, the labyrinth is a sign of confusion; on the other, when viewed from
above, it is a sign of complex order.49 As Harold Bloom notes, “all of us have
had the experience of admiring a structure when outside it, but becoming
unhappy within it.”50 Envisioning the labyrinth from within links to a
claustrophobic sense of ignorance, while also implying the possibility of
progress if you just turn the next corner. What better way to describe one’s experience of the labyrinthine infrastructures of a mass digitization project such as Google Books, with its particular conditions and contexts of experience and agency? On the one hand, Google Books appears to provide the
view from above, lending itself as a logistical aid in its information-rich
environment. On the other hand, Google Books also produces an alienating
effect of impenetrability on two levels. First, although Google presents
itself as a compass, its seemingly infinite and constantly rearranging
universe nevertheless creates a sense of vertigo, only reinforced by the
almost existential question “Do you feel lucky?” Second, Google Books also
feels impenetrable on a deeper level, with its black-boxed governing and
ordering principles, hidden behind complex layers of code, corporate cultures,
and nondisclosure agreements.51 But even less-commercial mass digitization
projects such as, for instance, Europeana and Monoskop can produce a sense of
claustrophobia and alienation in the user. Think only of the frustration
encountered when reaching dead ends in the form of broken links or in the lack of access imposed by European copyright regulations. Or the alienation and dissatisfaction that can well up when, as in Monoskop, there seem to be no limits to knowledge other than one’s own cognitive shortcomings.

The figure of the labyrinth also serves as a reminder that informational
strolling is not only a leisurely experience, but also a laborious process.
Penelope Doob thus points out the common medieval spelling of labyrinth as
_laborintus_ , which foregrounds the concept of labor and “difficult process,”
whether frustrating, useful, or both.52 In an age in which “labor itself is
now play, just as play becomes more and more laborious,”53 Doob’s etymological
excursion serves to highlight the fact that in many mass digitization projects
it is indeed the user’s leisurely information scrolling that in the end
generates profit, cultural value, and budgetary justification for mass
digitization platforms. José van Dijck’s analysis of the valuation of traffic in a digital environment is a timely reminder of how traffic is valued in a cultural memory environment that increasingly orients itself toward social media: “Even though communicative traffic on social media platforms seems
determined by social values such as popularity, attention, and connectivity,
they are impalpably translated into monetary values and redressed in business
models made possible by digital technology.”54 This is visible, for instance,
in Europeana’s usage statistics reports, which link the notions of _traffic_ and _performance_ together in an ontological equation (in this equation, poor performance inevitably amounts to a mark of death).55 In a blogpost marking the
launch of the _Europeana Statistics Dashboard_ , we are told that information
about mass digitization traffic is “vital information for a modern cultural
institution for both reporting and planning purposes and for public
accountability.”56 Thus, although visitors may feel solitary in their digital
wanderings, their digital footsteps are in fact obsessively traced and tracked
by mass digitization platforms and often also by numerous third parties.

Today, then, the user is indeed at work as she makes her way in the
labyrinthine infrastructures of mass digitization by scrolling, clicking,
downloading, connecting, and clearing and creating new paths. And while
“search” has become a keyword in digital knowledge environments, digital
infrastructures in mass digitization projects in fact distract as much as they
orient. This new economy of cultural memory begs the question: if mass
digitization projects, as labyrinthine infrastructures, invariably disorient
the wanderer as much as they aid her, how might we understand their
infrapolitics? After all, as the previous chapters have shown, mass
digitization projects often present a wide array of motivations for why
digitization should happen on a massive scale, with knowledge production and
cultural enlightenment usually featuring as the strongest arguments. But as
the spatialized heuristics of the flaneur and the labyrinth show, knowledge
production and navigation are anything but simple concepts. Rather, the
political dimensions of mass digitization discussed in previous chapters—such
as standardization, late sovereignty, and network power—are tied up with the
spatial imaginaries of what knowledge production and cultural memory are and
how they should and could be organized and navigated.

The question of the spatial imaginaries of knowledge production and
imagination has a long philosophic history. As historian David Bates notes,
knowledge in the Enlightenment era was often imagined as a labyrinthine
journey. A classic illustration of how this journey was imagined is provided
by Enlightenment philosopher Jean-Louis Castilhon, whose frustration is
palpable in this exclamation: “How cruel and painful is the situation of a
Traveller who has imprudently wandered into a forest where he knows neither
the winding paths, nor the detours, nor the exits!”57 These Enlightenment
journeys were premised upon an infrastructural framework that linked error and
knowledge, but also upon an experience of knowledge quests riddled by loss of
oversight and lack of a compass. As the previous sections show, the labyrinth
as a form of knowledge production in relation to truth and error persists as
an infrastructural trope in the digital. Yet, it has also metamorphosed
significantly since Castilhon. The labyrinthine infrastructural imaginaries we
find in digital environments thus differ significantly from more classical
images, not least under the influence of the rhizomatic metaphors of
labyrinths developed by Deleuze and Guattari and Eco. If the labyrinth of the
Renaissance had an endpoint and a truth, these new labyrinthine
infrastructures, as Kristin Veel points out, had a much more complex
relationship to the spatial organization of the truth. Eco and Deleuze and
Guattari thus conceived of their labyrinths as networks “in which all points
can be connected with one another” with “no center” but “an almost unlimited
multiplicity of alternative paths,” which makes it “impossible to rise above
the structure and observe it from the outside, because it transcends the
graphic two-dimensionality of the two earlier forms of labyrinths.”58 Deleuze
expressed the senselessness of these contemporary labyrinths as a “theater
where nothing is fixed, a labyrinth without a thread (Ariadne has hung
herself).”59

In mass digitization, this new infrastructural imaginary feeds a looming
concern over how best to curate and infrastructurate cultural collections. It
is this concern that we see at play in the aforementioned institutional deliberations over how best to create meaningful paths through cultural collections.
The main question that resounds is: where should the paths lead if there is no
longer one truth, that is, if the labyrinth has no center? Some mass
digitization projects seem to revel in this new reality. As we have seen,
shadow libraries such as Monoskop and UbuWeb use the affordances of the
digital to create new cultural connections outside of the formal hierarchies
of cultural memory institutions. Yet, while embraced by some, the new distribution of authority predictably generates anxiety in the cultural memory circles that had hitherto been able to lay claim to knowledge organization expertise.
This is the dizzying perspective that haunts the cultural memory professionals
faced with Europeana’s data governance model. Thus, as one Europeana
professional explained to me in 2010, “Europeana aims at an open-linked-data
model with a number of implications. One implication is that there will be no
control of data usage, which makes it possible, for instance, to link classics
with porn. Libraries do not agree to this loss of control which was at the
base of their self-understanding.”60 The Europeana professional then proceeded
to recount the profound anxiety experienced and expressed by knowledge
professionals as they increasingly came face-to-face with a curatorial reality
that is radically changing what counts as knowledge and context, where a
search for Courbet could, in theory, not only lead the user to other French
masters of painting but also to a copy of a porn magazine (provided it is out
of copyright). The anxiety experienced by knowledge professionals in the new
cultural memory ecosystem can of course be explained by a rationalized fear of
job insecurity and territorial concerns. Yet, the fear of knowledge
infrastructures without a center may also run deeper. As Penelope Doob reminds
us, the center of the labyrinth historically played a crucial moral and
epistemological role in the labyrinthine topos, as the site that held the
epiphanous key to unravel whatever evils or secrets the labyrinth contained.
With no center, there is no key, no epiphany.61 From this perspective, then,
it is not only a job that is lost. It is also the meaning of knowledge
itself.62

What, then, can we take from these labyrinthine wanderings as we pursue a
greater understanding of the infrapolitics of mass digitization? Certainly, as
this section shows, the politics of mass digitization is entangled in
spatialized imaginaries that have a long and complex cultural and affective
trajectory interlinked with ontological and epistemological questions about
the very nature of knowledge. Cladding the walls of these trajectories are, of
course, the ever-present political questions of authority and territory, but
also deeper cultural and affective questions about the nature and meaning of
knowledge as it is bandied about in our cultural imaginaries, between discoveries
and dead-ends, between freedom and control.

As the next section will show, one concept has in particular come to
encapsulate these concerns: the notion of serendipity. While the notion of
serendipity has a long history, it has gained new relevance with mass
digitization, where it is used to express the realm of possibilities opened up
by the new digital infrastructures of knowledge production. As such, it has
come to play a role, not only as a playful cultural imaginary, but also as an
architectural ideal in software development for mass digitization. In the
following section, we will look at a few examples of these architectures, as
well as the knowledge politics they are entangled in.

## The Architecture of Serendipitous Platforms

Serendipity has long been a cherished word in archival studies, used to
describe a magical moment of “Eureka!” A fickle and fabulating concept, it
belongs to the world of discovery, capturing the moment when a meandering
soul, a flaneur, accidentally stumbles upon a valuable find. As such, the
moment of serendipity is almost always a happy circumstance of chance, and
never an unfortunate moment of risk. This sense of happy accident is embodied in the word’s own origins. This section outlines the origins of the word and situates its
reemergence in theories on libraries and on digital realms of knowledge
production.

The English aristocrat Horace Walpole coined the word serendipity in a letter
to Horace Mann in 1754, in which he explained his fascination with a Persian
fairy tale about three princes from the _Isle of Serendip_63 who possess
superpowers of observation. In his letter, Walpole linked the contents of the
fantastical story to his view of how new discoveries are made: “As their
highnesses travelled, they were always making discoveries, by ‘accidental sagacity,’ of things which they were not in quest of.”64 And he proposed a
new word—“serendipity”—to describe this sublime talent for discovery.

Walpole’s conceptual invention did not immediately catch fire in common
parlance.65 But roughly two centuries later, it suddenly took hold.
Who awakened the notion from its dormant state, and why? Sociologists Robert
K. Merton and Elinor Barber provided one influential answer in their own
enjoyable exploration of the word. As they note, serendipity had a particular
playful tone to it, expressing a sense that knowledge comes about not only
through sheer willpower and discipline, but also via pleasurable chance. This
almost hedonistic dimension made it incompatible with the serious ethos of the
nineteenth century. As Merton and Barber note, “The serious early Victorians
were not likely to pick up serendipity, except perhaps to point to it as a
piece of frivolous whimsy. … Although the Victorians, and especially Victorian
scientists, were familiar with the part played by accident in the process of
discovery, they were likely neither to highlight that factor nor to clothe the
phenomenon of accidental discovery in so lighthearted a word as
serendipity.”66 But in the 1940s and 1950s something happened—the word began
to catch on. Merton and Barber link this turn of linguistic events not only to
pure chance, but also to a change in scientific networks and paradigms. Traveling
from the world of letters, as they recount, the word began making its way into
scientific circles, where attention was increasingly turned to “splashy
discoveries in lab and field.”67 But as Lorraine Daston notes, “discoveries,
especially those made by serendipity, depend partly on luck, and scientists
schooled in probability theory are loathe to ascribe personal merit to the
merely lucky,” and scientists therefore increasingly began to “domesticate
serendipity.”68 Daston remarks that while scientists schooled in probability
were reluctant to ascribe their discoveries to pure chance, the “historians
and literary scholars who struck serendipitous gold in the archives did not
seem so eager to make a science out of their good fortune.”69 One tale of how
literary and historical scholars struck serendipitous gold in the archive is
provided by Mike Featherstone:

> Once in the archive, finding the right material which can be made to speak
may itself be subject to a high degree of contingency—the process not of
deliberate rational searching, but serendipity. In this context it is
interesting to note the methods of innovatory historians such as Norbert Elias
and Michel Foucault, who used the British and French national libraries in
highly unorthodox ways by reading seemingly haphazardly “on the diagonal,”
across the whole range of arts and sciences, centuries and civilizations, so
that the unusual juxtapositions they arrived at summoned up new lines of
thought and possibilities to radically re-think and reclassify received
wisdom. Here we think of the flaneur who wanders the archival textual city in
a half-dreamlike state in order to be open to the half-formed possibilities of
the material and sensitive to unusual juxtapositions and novel perceptions.70

English scholar Nancy Schultz in similar terms notes that the archive “in the
humanities” represents a “prime site for serendipitous discovery.”71 In most
of these cases, serendipity is taken to mean some form of archival insight,
and often even a critical intellectual process. Deb Verhoeven, Associate Dean
of Engagement and Innovation at the University of Technology Sydney, reminds
us in relation to feminist archival work that “stories of accidental
discovery” can even take on dimensions of feminist solace, consoling “the
researcher, and us, with the idea that no system, whatever its claims to
discipline, comprehensiveness, and structure, is exempt from randomness, flux,
overflow, and therefore potential collapse.”72

But with mass digitization processes, their fusion of probability theories and
archives, and their ideals of combined fun and fact-finding, the questions
raised in the hard sciences about serendipity, its connotations of freedom and
chance, engineering and control, now also haunt the archives of historians and
literary scholars. Serendipity has increasingly come to be used as a motivating
factor for digitization in the first place, based on arguments that mass
digitized archives allow not only for dedicated and target-oriented research,
but also for new modes of search, of reading haphazardly “on the diagonal”
across genres and disciplines, as well as across institutional and national
borders that hitherto kept works and insights apart. As one spokesperson from
a prominent mass digitization company states, “digital collections have been
designed both to assist researchers in accessing original primary source
materials and to enable them to make serendipitous discoveries and unexpected
connections between sources.”73 And indeed, this sentiment reverberates in all
mass digitization projects from Europeana and Google Books to smaller shadow
libraries such as UbuWeb and Monoskop. Some scholars even argue that
serendipity takes on new forms due to digitization.74

It seems only natural, then, that mass digitization projects, and their
actors, have actively adopted the discourse of serendipity, both as a selling
point and a strategic claim. Talking about Google’s digitization program, Dr.
Sarah Thomas, Bodley’s Librarian and Director of Oxford University Library
Services, notes: “Library users have always loved browsing books for the
serendipitous discoveries they provide. Digital books offer a similar thrill,
but on multiple levels—deep entry into the texts or the ability to browse the
virtual shelf of books assembled from the world's great libraries.”75 But it
has also raised questions for those people who are in charge not only of holding serendipity forth as an ideal, but also of building the architecture to
facilitate it. Dan Cohen, speaking on behalf of the DPLA, thus noted the
centrality of the concept, but also the challenges that mass digitization
raised in practical terms: “At DPLA, we’ve been thinking a lot about what’s
involved with serendipitous discovery. Since we started from scratch and
didn’t need to create a standard online library catalog experience, we were
free to experiment and provide novel ways into our collection of over five
million items. How to arrange a collection of that scale so that different
users can bump into items of unexpected interest to them?” While adopting the
language of serendipity is easy, its infrastructural construction is much
harder to envision. This challenge clearly troubles the strategic team
developing Europeana’s infrastructure, as it notes in a programmatic tone that
stands hilariously at odds with the curiosity it must cater to:

> Reviewing the personas developed for the D6.2 Requirements for Europeana.eu8
deliverable—and in particular those of the “culture vultures”—one finds two
somewhat-opposed requirements. On the one hand, they need to be able to find
what they are looking for, and navigate through clear and well-structured
data. On the other hand, they also come to Europeana looking for
“inspiration”—that is to say, for something new and unexpected that points
them towards possibilities they had previously been unaware of; what, in the
formal literature of user experience and search design, is sometimes referred
to as “serendipity search.” Europeana’s users need the platform to be
structured and predictable—but not entirely so.76
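One way a developer might translate the requirement that the platform be structured and predictable, “but not entirely so,” into code is to keep the relevance-ranked results intact and pad them with a handful of randomly drawn items from the wider collection. The following Python sketch is purely illustrative: the function name, the blending strategy, and the example records are assumptions of mine, not Europeana’s actual search implementation.

```python
"""A sketch of a 'serendipity search' result list: relevance-ranked hits
followed by a few randomly sampled items from the wider collection.
Illustrative only; names, data, and strategy are assumptions."""
import random
from typing import Iterable, List, Optional


def blend_results(ranked: List[str], collection: Iterable[str],
                  surprise_slots: int = 2,
                  seed: Optional[int] = None) -> List[str]:
    """Keep the ranked hits intact, then append a few 'inspiration' items
    drawn at random from the collection that are not already in the hits."""
    rng = random.Random(seed)
    pool = [item for item in collection if item not in set(ranked)]
    surprises = rng.sample(pool, k=min(surprise_slots, len(pool)))
    return ranked + surprises


# Hypothetical example: two ranked hits plus two serendipitous additions.
ranked_hits = ["Courbet: The Stone Breakers", "Courbet: The Wave"]
whole_collection = ranked_hits + [
    "Atlas of Mons, 1910",
    "Postcard: Ostend pier, ca. 1900",
    "Herbarium sheet, 1873",
]
print(blend_results(ranked_hits, whole_collection, seed=1))
```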

To achieve serendipity, mass digitization projects have often sought to take
advantage of the labyrinthine infrastructures of digitization, relying not
only on their own virtual bookshelves, but also on the algorithmic highways
and back alleys of social media. Twitter, in particular, before it adopted
personalization methods, became a preferred infrastructure for mass
digitization projects, which took advantage of Twitter’s lack of personalized search to create whimsical bots that injected randomness into the user’s feed. One example is the Digital Public Library of America’s DPLA Bot, which grabs a random noun and uses the DPLA API to share the first result it finds. The DPLA
Bot aims to “infuse what we all love about libraries—serendipitous
discovery—into the DPLA” and thus seeks to provide a “kind of ‘Surprise me!’
search function for DPLA.”77 It did not take the programmer Peter Meyr much
time to develop a similar bot for Europeana. In an interview with
EuropeanaPro, Peter Meyr directly related the EuropeanaBot to the
serendipitous affordances of Twitter and its rewards for mass digitization
projects, noting that:

> The presentation of digital resources is difficult for libraries. It is no
longer possible to just explore, browse the stacks and make serendipitous
findings. With Europeana, you don't even have a physical library to go to. So
I was interested in bringing a little bit of serendipity back by using a
Twitter bot. … If I just wanted to present (semi)random Europeana findings, I
wouldn’t have needed Twitter—an RSS-Feed or a web page would be enough.
However, I wanted to infuse EuropeanaBot with a little bit of “Twitter
culture” and give it a personality.78

The British Library also developed a Twitter bot titled the Mechanical
Curator, which posts random resources with no customization except a special
focus on images in the library’s seventeenth- to nineteenth-century
collections.79 But there were also many projects that existed outside social
media platforms and operated across mass digitization projects. One example
was the “serendipity engine,” Serendip-o-matic, which first examined the
user’s research interests and then, based on this data, identified “related
content in locations such as the Digital Public Library of America (DPLA),
Europeana, and Flickr Commons.”80 While this initiative was not endorsed by
any of these mass digitization projects, they nevertheless featured it on
their blogs, integrating it into the mass digitization ecosystem.
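To make the mechanics concrete, here is a minimal Python sketch of a “Surprise me!” routine in the spirit of the random-item bots described above. It assumes the publicly documented DPLA Items API (the `https://api.dp.la/v2/items` endpoint with `q`, `page_size`, and `api_key` parameters, and `sourceResource`/`isShownAt` response fields); the hard-coded word list stands in for a real random-noun source, and the result is printed rather than posted to Twitter.

```python
"""A minimal 'Surprise me!' sketch in the spirit of the DPLA Bot.
Assumptions: the DPLA Items API endpoint, parameters, and response fields
named below; the word list and print() stand in for a random-noun service
and a Twitter client."""
import random
import requests

DPLA_ENDPOINT = "https://api.dp.la/v2/items"
API_KEY = "YOUR_DPLA_API_KEY"  # placeholder; a real key is required
NOUNS = ["lighthouse", "tram", "comet", "loom", "archive"]  # stand-in noun list


def surprise_me():
    """Pick a random noun, query the DPLA API, and return a short message
    describing the first matching item (or None if nothing is found)."""
    noun = random.choice(NOUNS)
    response = requests.get(
        DPLA_ENDPOINT,
        params={"q": noun, "page_size": 1, "api_key": API_KEY},
        timeout=10,
    )
    response.raise_for_status()
    docs = response.json().get("docs", [])
    if not docs:
        return None
    doc = docs[0]
    title = doc.get("sourceResource", {}).get("title", "Untitled")
    if isinstance(title, list):  # titles may be returned as a list
        title = title[0]
    link = doc.get("isShownAt", "")
    return f"Serendipity for '{noun}': {title} {link}"


if __name__ == "__main__":
    print(surprise_me() or "No result this time; roll the dice again.")
```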

Yet, while mass digitization for some represents the opportunity to amplify
the chance of chance, other scholars increasingly wonder whether the
engineering processes of mass digitization would take serendipity out of the
archive. Indeed, to them, the digital is antithetical to chance. One such
viewpoint is voiced by historian Tristram Hunt in an op-ed against Google’s British digitization program under the title “Online is fine, but
history is best hands on.” In it, Hunt argues that the digital, rather than
providing a new means of chance finding, would impede historical discovery and
that only the analog archival environment could foster real historical
discoveries, since it is “… only with MS in hand that the real meaning of the
text becomes apparent: its rhythms and cadences, the relationship of image to
word, the passion of the argument or cold logic of the case. Then there is the
serendipity, the scholar’s eternal hope that something will catch his eye.”81 In similar terms, Graeme Davison describes the lack of serendipitous errings in digital archives, likening digital search engines to driving
“a high-powered car down a freeway, compared with walking or cycling. It gets
us there more quickly but we skirt the towns and miss a lot of interesting
scenery on the way.”82 William McKeen also links the loss of serendipity to
the acceleration of method in the digital:

> Think about the library. Do people browse anymore? We have become such a
directed people. We can target what we want, thanks to the Internet. Put a
couple of key words into a search engine and you find—with an irritating hit
or miss here and there—exactly what you’re looking for. It’s efficient, but
dull. You miss the time-consuming but enriching act of looking through
shelves, of pulling down a book because the title interests you, or the
binding. Inside, the book might be a loser, a waste of the effort and calories
it took to remove it from its place and then return. Or it might be a dark
chest of wonders, a life-changing first step into another world, something to
lead your life down a path you didn't know was there.83

Common to all these statements is the sentiment that the engineering of
serendipity removes the very chance of serendipity. As Nicholas Carr notes,
“Once you create an engine—a machine—to produce serendipity, you destroy the
essence of serendipity. It becomes something expected rather than
unexpected.”84 It appears, then, that computational methods have introduced
historians and literary scholars to the same “beaverish efforts”85 to
domesticate serendipity as the hard sciences had to face at the beginning of
the twentieth century.

To my knowledge, few systematic studies exist about whether mass digitization
projects such as Europeana and Google Books hamper or foster creative and
original research in empirical terms. How one would go about such a study is
also an open question. The dichotomy between digital and analog does seem a
bit contrived, however. As Dan Cohen notes in a blogpost for DPLA, “bookstores
and libraries have their own forms of ‘serendipity engineering,’ from
storefront staff picks to behind-the-scenes cataloguing and shelving methods
that make for happy accidents.”86 Yet there is no doubt that the discourse of
serendipity has been infused with new life that sometimes veers toward a
“spectacle of serendipity.”87

Over the past decade, the digital infrastructures that organize our cultural
memory have become increasingly integrated into a digital economy that valuates “experience” as a cultural currency that can be exchanged for profit, and our
affective meanderings as a form of industrial production. This digital economy
affects the architecture and infrastructure of digital archives. The archival
discourse on digital serendipity is thus now embroiled in a more deep-seated
infrapolitics of workspace architecture, influenced by Silicon Valley’s
obsession with networks, process, and connectivity.88 Think only of the
increasing importance of Google and Facebook to mass digitization projects:
most of these projects have a Facebook page on which they showcase their
material, just as they take pains to make themselves “algorithmically
recognizable”89 to Google and other search engines in the hope of reaching an
audience beyond the echo chamber of archives and to distribute their archival
material on leisurely tidbit platforms such as Pinterest and Twitter.90 If
serendipity is increasingly thought of as a platform problem, the final
question we might pose is what kind of infrapolitics this platform economy
generates and how it affects mass digitization projects.

## The Infrapolitics of Platform Power

As the previous sections show, mass digitization projects rely upon spatial
metaphors to convey ideas about, and ideals of, cultural memory
infrastructures, their knowledge production, and their serendipitous
potential. Thus, for mass digitization projects, the ideal scenario is that
the labyrinthine errings of the user result in serendipitous finds that in
turn bring about new forms of cultural value. From the point of view of the user, however, being caught up in the labyrinth might just as easily give rise to a sense of lost oversight and alienation in the alleyways of commodified infrastructures. These two
scenarios co-exist because of what Penelope Doob (as noted in the section on
labyrinthine imaginaries) refers to as the dual potentiality of the labyrinth,
which when experienced from within can become a sign of confusion, and when
viewed from above becomes a sign of complex order.91

In this final section, I will turn to a new spatial metaphor, which appears to
have resolved this dual potentiality of the spatial perspective of mass
digitization projects: the platform. The platform has recently emerged as a
new buzzword in the digital economy, connoting simultaneously a perspective, a
business strategy, and a political ideology. Ideally the platform provides a
different perspective than the labyrinth, offering the user the possibility of
simultaneously constructing the labyrinth and viewing it from above. This
final section therefore explores how we might understand the infrapolitics of
the platform, and its role in the digital economy.

In its recent business strategy, Europeana claimed that it was moving from
operating as a “portal” to operating as a “platform.”92 The announcement was
part of a broader infrastructural transition in the field of cultural memory,
undergirded by a process of opening up and connecting the cultural memory
sector to wider knowledge ecosystems.93 Indeed, Europeana’s move is part of a
much larger discursive and material reality of a more fundamental process of
“platformization” of the web.94 The notion of the platform has thus recently
become an important heuristic for understanding the cultural development of
the web and its economy, fusing the computational understanding of the
platform as an environment in which code is executed95 and the political and
social understanding of a platform as a site of politics.96

While the infrapolitics of the platformization of the web has become a central
discussion in software and communication studies, little attention has been paid to the implications of platforms for the politics of cultural memory.
Yet, Europeana’s business strategy illustrates the significant infrapolitical
role that platforms are given in mass digitization literature. Citing digital
historian Tim Sherratt’s claim that “portals are for visiting, platforms for
building on,”97 Europeana’s strategy argues that if cultural memory sites free
themselves and their content from the “prison of portals” in favor of more
openness and flexibility, this will in turn empower users to create their own
“pathways” through the digital cultural memory, instead of being forced to
follow predetermined “narrative journeys.”98 The business plan’s reliance on
Sherratt’s theory of platforms shows that although the platform has a
technical meaning in computation, Europeana’s discourse goes beyond mere
computational logic. It instead signifies an infrapolitics that carries with
it an assumption about the political dynamics of software, standing in for the
freedom to act in the labyrinthine infrastructures of digital collections.

Yet, what is a platform, and how might we understand its infrapolitics? As
Tarleton Gillespie points out, the oldest definition of platform is
architectural, as a level or near-level surface, often elevated.99 As such,
there is something inherently simple about platforms. As architect Sverre Fehn
notes, “the simplest form of architecture is to cultivate the surface of the
earth, to make a platform.”100 Fehn’s statement conceals a more fundamental
insight about platforms, however: in the establishment of a low horizontal
platform, one also establishes a social infrastructure. Platforms are thus not
only material constructions, they also harbor infrapolitical affordances. The
etymology of the notion of “platform” evidences this infrapolitical dimension.
Originally a spatial concept, the notion of platform appeared in
architectural, figurative, and military formations in the sixteenth century,
soon developing into specialized discourses of party programs and military and
building construction,101 religious congregation,102 and architectural vantage
points.103 Both the architectural and social understandings of the term
connote a process in which sites of common ground are created in
contradistinction to other sites. In geology, for instance, platforms emerge
from abrasive processes that elevate and distinguish one area in relation to
others. In religious and political discourse, platforms emerge as
organizational sites of belonging, often in contradistinction to other forms
of organization. Platforms, then, connote both common ground and demarcated
borders that emerge out of abrasive processes. In the nineteenth century, a
third meaning adjoined the notion of platforms, namely trade-related
cooperation. This introduced a dynamic to the word that is less informed by
abrasive processes and more by the capture processes of what we might call
“connective capitalism.” Yet, despite connectivity taking center stage, even
these platforms were described as territorializing constructs that favor some
organizations and corporations over others.104

In the twentieth and twenty-first centuries, as Gilles Deleuze and Felix
Guattari successfully urged scholars and architects to replace roots with
rhizomes, the notion of platform began taking on yet another meaning. Deleuze
and Guattari began fervently arguing for the nonexistence of rooted
platforms.105 Their vision soon gave rise to a nonfoundational understanding
of the world as a “limitless multiplicity of positions from which it is
possible only to erect provisional constructions.”106 Deleuze and Guattari’s
ontology became widely influential in theorizing the web _in toto_ ; as Rem
Koolhaas once noted, the “language of architecture—platform, blueprint,
structure—became almost the preferred language for indicating a lot of
phenomenon that we’re facing from Silicon Valley.”107 From the singular platforms of military and party politics emerged, then, the thousand
platforms of the digital, where “nearly every surge of research and investment
pursued by the digital industry—e-commerce, web services, online advertising,
mobile devices and digital media sales—has seen the term migrate to it.”108

What infrapolitical logic can we glean from Silicon Valley’s adoption of the
vernacular notion of the platform? Firstly, it is an infrapolitics of
temporality. As Tarleton Gillespie points out, the semantic aspects of
platforms “point to a common set of connotations: a ‘raised level surface’
designed to facilitate some activity that will subsequently take place. It is
anticipatory, but not causal.”109 The inscription of platforms into the
material infrastructures of the Internet thus assumes a value-producing
futurity. If serendipity is what is craved, then platforms are the site in
which this is thought to take place.

Despite its inclusion in the entrepreneurial discourse of Silicon Valley, the
notion of the platform is also used to signal an infrapolitics of
collaboration, even subversion. Olga Goriunova, for instance, explores the subversive dynamics of critical artistic platforms,110 and Trebor Scholz
promotes the term “platform cooperativism” to advance worker-based
cooperatives that would “design their own apps-based platforms, fostering
truly peer-to-peer ways of providing services and things, and speak truth to
the new platform capitalists.”111 Shadow libraries such as Monoskop appear as
perfect examples of such subversive platforms and evidence of Srnicek’s
reminder that not _all_ social interactions are co-opted into systems of
profit generation.112 Yet, as the territorial, legal, and social infrastructures of mass digitization become increasingly labyrinthine, it takes a lot of critical consciousness to properly interpret and understand their
infrapolitics. Engage with the shadow library Library Genesis on Facebook, for
instance, and you submit to platform capitalism.

A significant trait of platform-based corporations such as Google and Facebook
is that they more often than not present themselves as apolitical, neutral,
and empowering tools of connectivity, passive until picked up by the user.
Yet, as Lisa Nakamura notes, “reading’s economies, cultures of sharing, and
circuits of travel have never been passive.”113 One of digital platforms’ most
important infrapolitical traits is their dependence on network effects and a
winner-takes-all logic, where the platform owner is not only conferred
enormous power vis-à-vis other less successful platforms but also vis-à-vis
the platform user.114 Within this game, the platform owner determines the
rules of the product and the service on offer. Entering into the discourse of
platforms implies, then, not only constructing a software platform, but also
entering into a parasitical game of relational network effects, where
different platforms challenge and use each other to gain more views and
activity. This gives successful platforms a great advantage in the digital
economy. They not only gain access to data, but they also control the rules of
how the data is to be managed and governed. Therefore, when a user is surfing
Google Books, Google—and not the library—collects the user’s search queries,
including results that appeared in searches and pages the user visited from
the search. The browser, moreover, tracks the user’s activity, including which pages the user has visited and when, other user data, possibly login details captured by auto-fill features, the user’s IP address and Internet service provider, device hardware details, operating system and browser version, cookies, and cached
data from websites. The labyrinthine infrastructure of the mass digitization
ecosystem also means that if you access one platform through another, your
data will be collected in different ways. Thus, if you visit Europeana through
Facebook, it will be Facebook that collects your data, including name and
profile; biographical information such as birthday, hometown, work history,
and interests; username and unique identifier; subscriptions, location,
device, activity date, time and time-zone, activities; and likes, check-ins,
and events.115 As more platforms emerge from which one can access mass
digitized archives, such as social media sites like Facebook, Google+,
Pinterest, and Twitter, as well as mobile devices such as Android, an overview of who collects one’s data, and how, becomes ever more elusive.

Europeana’s reminder illustrates the assemblatic infrastructural set-up of
mass digitization projects and how they operate with multiple entry points,
each of which may attach its own infrapolitical dynamics. It also illustrates
the labyrinthine infrastructures of privacy settings, which are increasingly difficult to map because of constant changes and
reconfigurations. It furthermore illustrates the changing legal order from the
relatively stable sovereign order of human rights obligations to the
modulating landscape of privacy policies.

How then might we characterize the infrapolitics of the spatial imaginaries of
mass digitization? As this chapter has sought to convey, writings about mass
digitization projects are shot through with spatialized metaphors, from the
flaneur to the labyrinth and the platform, either in literal terms or in the
imaginaries they draw on. While this section has analyzed these imaginaries in
a somewhat chronological fashion, with the interactivity of the platform
increasingly replacing the more passive gaze of the spectator, they coexist in
that larger complex of spatial digital thinking. While often used to elicit
uncomplicated visions of empowerment, desire, curiosity, and productivity,
these infrapolitical imaginaries in fact show the complexity of mass
digitization projects in their reinscription of users and cultural memory
institutions in new constellations of power and politics.

## Notes

1. Kelly 1994, p. 263. 2. Connection Machines were developed by the
supercomputer manufacturer Thinking Machines, a concept that also appeared in
Jorge Luis Borges’s _The Total Library_. 3. Brewster Kahle, “Transforming Our
Libraries from Analog to Digital: A 2020 Vision,” _Educause Review_ , March
13, 2017, from-analog-to-digital-a-2020-vision>. 4. Ibid. 5. Couze Venn, “The
Collection,” _Theory, Culture & Society_ 23, no. 2–3 (2006), 36. 6. Hacking
2010. 7. Lefebvre 2009. 8. Blair and Stallybrass 2010, 139–163. 9. Ibid., 143.
10. Dewey 1926, 311. 11. See, for instance, Lorraine Daston’s wonderful
account of the different types of historical consciousness we find in archives
across the sciences: Daston 2012. 12. David Weinberger, “Library as Platform,”
_Library Journal_ , September 4, 2012, /future-of-libraries/by-david-weinberger/#_>. 13. Nakamura 2002, 89. 14.
Shannon Mattern,”Library as Infrastructure,” _Places Journal_ , June 2014,
. 15. Couze
Venn, “The Collection,” _Theory, Culture & Society_ 23, no. 2–3 (2006), 35–40.
16. Žižek 2009, 39. 17. Voltaire, “Une grande bibliothèque a cela de bon,
qu’elle effraye celui qui la regarde,” in _Dictionaire Philosophique_ , 1786,
265. 18. In his autobiography, Borges asserted that it “was meant as a
nightmare version or magnification” of the municipal library he worked in up
until 1946. Borges describes his time at this library as “nine years of solid
unhappiness,” both because of his co-workers and the “menial” and senseless
cataloging work he performed in the small library. Interestingly, then, Borges
translated his own experience of being informationally underwhelmed into a
tale of informational exhaustion and despair. See “An Autobiographical Essay”
in _The Aleph and Other Stories_ , 1978, 243. 19. Borges 2001, 216. 20. Yeo
2003, 32. 21. Cited in Blair 2003, 11. 22. Bawden and Robinson 2009, 186. 23.
Garrett 1999. 24. Featherstone 2000, 166. 25. Thus, for instance, one
Europeana-related project with the apt acronym PATHS, argues for the need to
“make use of current knowledge of personalization to develop a system for
navigating cultural heritage collections that is based around the metaphor of
paths and trails through them” (Hall et al. 2012). See also Walker 2006. 26.
Inspiring texts for (early) spatial thinking of the Internet, see: Hayles
1993; Nakamura 2002; Chun 2006. 27. Much has been written about whether or not
it makes sense to frame digital realms and infrastructures in spatial terms,
and Wendy Chun has written an excellent account of the stakes of these
arguments, adding her own insightful comments to them; see chapter 1, “Why
Cyberspace?” in Chun 2013. 28. Cited in Hartmann 2004, 123–124. 29. Goldate
1996. 30. Featherstone 1998. 31. Dörk, Carpendale, and Williamson 2011, 1216.
32. Wilson 1992, 108. 33. Benjamin. 1985a, 40. 34. See, for instance, Natasha
Dow Schüll’s fascinating study of the addictive design of computational
culture: Schüll 2014. For an industry perspective, see Nir Eyal, _Hooked: How
to Build Habit-Forming Products_ (Princeton, NJ: Princeton University Press,
2014). 35. Wilson 1992, 93. 36. Indeed, it would be interesting to explore the
link between Susan Buck Morss’s reinterpretation of Benjamin’s anesthetic
shock of phantasmagoria and today’s digital dopamine production, as described
by Natasha Dow Schüll in _Addicted by Design_ (2014); see Buck-Morss 2006. See
also Bjelić 2016. 37. Wolff 1985; Pollock 1998. 38. Wilson 1992; Nord 1995;
Nava and O’Shea 1996, 38–76. 39. Hartmann 1999. 40. Smalls 2003, 356. 41.
Ibid., 357. 42. Cadogan 2016. 43. Marian Ryan, “The Disabled flaneur,” _New
York Times_ , December 12, 2017, /the-disabled-flaneur.html>. 44. Benjamin. 1985b, 54. 45. Evgeny Morozov, “The
Death of the Cyberflaneur,” _New York Times_ , February 4, 2012. 46. Eco 2014,
169. 47. See also Koevoets 2013. 48. In colloquial English, “labyrinth” is
generally synonymous with “maze,” but some people observe a distinction, using
maze to refer to a complex branching (multicursal) puzzle with choices of path
and direction, and using labyrinth for a single, non-branching (unicursal)
path, which leads to a center. This book, however, uses the concept of the
labyrinth to describe all labyrinthine infrastructures. 49. Doob 1994. 50.
Bloom 2009, xvii. 51. Might this be the labyrinthine logic detected by
Foucault, which unfolds only “within a hidden landscape,” revealing “nothing
that can be seen” and partaking in the “order of the enigma”; see Foucault
2004, 98. 52. Doob 1994, 97. Doob also finds this perspective in the
fourteenth century in Chaucer’s _House of Fame_ , in which the labyrinth
“becomes an emblem of the limitations of knowledge in this world, where all we
can finally do is meditate on _labor intus_ ” (ibid., 313). Lady Mary Wroth’s
work _Pamphilia to Amphilanthus_ provides the same imagery, telling the story
of the female heroine, Pamphilia, who fails to escape a maze but nevertheless
engages her experience within it as a source of knowledge. 53. Galloway 2013a,
29. 54. van Dijck 2012. 55. “Usage Stats for Europeana Collections,”
_EuropeanaPro,_ usage-statistics>. 56. Joris Pekel, “The Europeana Statistics Dashboard is
here,” _EuropeanaPro_ , April 6, 2016, /introducing-the-europeana-statistics-dashboard>. 57. Bates 2002, 32. 58. Veel
2003, 154. 59. Deleuze 2013, 56. 60. Interview with professor of library and
information science working with Europeana, Berlin, Germany, 2011. 61. Borges
mused upon the possible horrendous implications of such a lack, recounting two
labyrinthine scenarios he once imagined: “In the first, a man is supposed to
be making his way through the dusty and stony corridors, and he hears a
distant bellowing in the night. And then he makes out footprints in the sand
and he knows that they belong to the Minotaur, that the minotaur is after him,
and, in a sense, he, too, is after the minotaur. The Minotaur, of course,
wants to devour him, and since his only aim in life is to go on wandering and
wandering, he also longs for the moment. In the second sonnet, I had a still
more gruesome idea—the idea that there was no minotaur—that the man would go
on endlessly wandering. That may have been suggested by a phrase in one of
Chesterton’s Father Brown books. Chesterton said, ‘What a man is really afraid
of is a maze without a center.’ I suppose he was thinking of a godless
universe, but I was thinking of the labyrinth without a minotaur. I mean, if
anything is terrible, it is terrible because it is meaningless.” Borges and
Dembo 1970, 319. 62. Borges actually found a certain pleasure in the lack of
order, however, noting that “I not only feel the terror … but also, well, the
pleasure you get, let’s say, from a chess puzzle or from a good detective
novel.” Ibid. 63. Serendib, also spelled Serendip (Arabic Sarandīb), was the
Persian/Arabic word for the island of Sri Lanka, recorded in use as early as
AD 361. 64. Letter to Horace Mann, 28 January 1754, in _Walpole’s
Correspondence_ , vol. 20, 407–411. 65. As Robert Merton and Elinor Barber
note, it first made it into the OED in 1912 (Merton and Barber 2004, 72). 66.
Merton and Barber 2004, 40. 67. Lorraine Daston, “Are You Having Fun Today?,”
_London Review of Books_ , September 23, 2004. 68. Ibid. 69. Ibid. 70.
Featherstone 2000, 594. 71. Nancy Lusignan Schultz, “Serendipity in the
Archive,” _Chronicle of Higher Education_ , May 15, 2011,
. 72.
Verhoeven 2016, 18. 73. Caley 2017, 248. 74. Bishop 2016. 75. “Oxford-Google
Digitization Project Reaches Milestone,” Bodleian Library and Radcliffe
Camera, March 26, 2009.
. 76. Timothy
Hill, David Haskiya, Antoine Isaac, Hugo Manguinhas, and Valentine Charles
(eds.), _Europeana Search Strategy_ , May 23, 2016,
.
77. “DPLAbot,” _Digital Public Library of America_ , .
78. “Q&A with EuropeanaBot developer,” _EuropeanaPro_ , August 20, 2013,
. 79. There
are of course many other examples, some of which offer greater interactivity,
such as the TroveNewsBot, which feeds off of the National Library of
Australia’s 370 million resources, allowing the user to send the bot any text
to get the bot digging through the Trove API for a matching result. 80.
Serendip-o-matic, n.d. . 81. Tristram Hunt,
“Online Is Fine, but History Is Best Hands On,” _Guardian_ July 3, 2011,
library-google-history>. 82. Davison 2009. 83. William McKeen, “Serendipity,”
_New York Times,_ (n.d.),
. 84. Carr 2006.
We find this argument once again in Aleks Krotoski, who highlights the man-
machine dichotomy, noting that the “controlled binary mechanics” of the search
engine actually make serendipitous findings “more challenging to find” because
“branching pathways of possibility are too difficult to code and don’t scale”
(Aleks Krotoski, “Digital serendipity: be careful what you don't wish for,”
_Guardian_ , August 11, 2011,
profiling-aleks-krotoski>.) 85. Lorraine Daston, “Are You Having Fun Today?,”
_London Review of Books_ , September 23, 2004. 86. Dan Cohen, “Planning for
Serendipity,” _DPLA_ News and Blog, February 7, 2014,
. 87. Shannon
Mattern, “Sharing Is Tables,” _e-flux_ , October 17, 2017,
furniture-for-digital-labor/>. 88. Greg Lindsay, “Engineering Serendipity,”
_New York Times_ , April 5, 2013,
serendipity.html>. 89. Gillespie 2017. 90. See, for instance, Milena Popova,
“Facebook Awards History App that Will Use Europeana’s Collections,”
_EuropeanaPro_ , March 7, 2014, awards-history-app-that-will-use-europeanas-collections>. 91. Doob 1994. 92.
“Europeana Strategy Impact 2015–2020,”
.
93. Ping-Huang 2016, 53. 94. Helmond 2015. 95. Ian Bogost and Nick Montfort.
2009. “Platform studies: frequently asked questions.” _Proceedings of the
Digital Arts and Culture Conference_.
. 96. Srnicek 2017; Helmond 2015;
Gillespie 2010. 97. “While a portal can present its aggregated content in a
way that invites exploration, the experience is always constrained—pre-
determined by a set of design decisions about what is necessary, relevant and
useful. Platforms put those design decisions back into the hands of users.
Instead of a single interface, there are innumerable ways of interacting with
the data.” See Tim Sherratt, “From Portals to Platforms; Building New
Frameworks for User Engagement,” National Library of Australia, November 5,
2013, platform>. 98. “Europeana Strategy Impact 2015–2020,”
.
99. Gillespie 2010, 349. 100. Fjeld and Fehn 2009, 108. 101. Gießmann 2015,
126. 102. See, for example, C. S. Lewis’s writings on Calvinism in _English
Literature in the Sixteenth Century Excluding Drama_. Or how about
Presbyterian minister Lyman Beecher, who once noted in a sermon: “in organizing
any body, in philosophy, religion, or politics, you must _have_ a platform;
you must stand somewhere; on some solid ground.” Such a platform could gather
people, so that they could “settle on principles just as … bees settle in
swarms on the branches, fragrant with blossoms and flowers.” See Beecher 2012,
21. 103. “Platform, in architecture, is a row of beams which support the
timber-work of a roof, and lie on top of the wall, where the entablature ought
to be raised. This term is also used for a kind of terrace … from whence a
fair prospect may be taken of the adjacent country.” See Nicholson 1819. 104.
As evangelist Calvin Colton noted in his work on the US’s public economy, “We
find American capital and labor occupying a very different position from that
of the same things in Europe, and that the same treatment applied to both,
would not be beneficial to both. A system which is good for Great Britain may
be ruinous to the United States. … Great Britain is the only nation that is
prepared for Free Trade … on a platform of universal Free Trade, the advanced
position of Great Britain … in her skill, machinery, capital and means of
commerce, would make all the tributary to her; and on the same platform, this
distance between her and other nations … instead of diminishing, would be
forever increasing, till … she would become the focus of the wealth, grandeur,
and power of the world.” 105. Deleuze and Guattari 1987. 106. Solá-Morales
1999, 86. 107. Budds 2016. 108. Gillespie 2010, 351. 109. Gillespie 2010, 350.
Indeed, it might be worth resurrecting the otherwise-extinct notion of
“plotform” to reinscribe agency and planning into the word. See Tawa 2012.
110. As Olga Gurionova points out, platforms have historically played a
significant role in creative processes as a “set of shared resources that
might be material, organizational, or intentional that inscribe certain
practices and approaches in order to develop collaboration, production, and
the capacity to generate change.” Indeed, platforms form integral
infrastructures in the critical art world for alternative systems of
organization and circulation that could be mobilized to “disrupt
institutional, representational, and social powers.” See Olga Goriunova, _Art
Platforms and Cultural Production on the Internet_ (New York: Routledge,
2012), 8. 111. Trebor Scholz, “Platform Cooperativism vs. the Sharing
Economy,” _Medium_ , December 5, 2016, cooperativism-vs-the-sharing-economy-2ea737f1b5ad>. 112. Srnicek 2017, 28–29.
113. Nakamura 2013, 243. 114. John Zysman and Martin Kennedy, “The Next Phase
in the Digital Revolution: Platforms, Automation, Growth, and Employment,”
_ETLA Reports_ 61, October 17, 2016, /ETLA-Raportit-Reports-61.pdf>. 115. Europeana’s privacy page explicitly notes
this, reminding the user that, “this site may contain links to other websites
that are beyond our control. This privacy policy applies solely to the
information you provide while visiting this site. Other websites which you
link to may have privacy policies that are different from this Privacy
Policy.” See “Privacy and Terms,” _Europeana Collections_ ,
.

# 6 Concluding Remarks

I opened this book claiming that the notion of mass digitization has shifted
from a professional concept to a cultural political phenomenon. If the former
denotes a technical way of duplicating analog material in digital form, mass
digitization as a cultural practice is a much more complex apparatus. On the
one hand, it offers the simple promise of heightened public and private access
to—and better preservation of—the past; on the other, it raises significant
political questions about ethics, politics, power, and care in the digital
sphere. I locate the emergence of these questions within the infrastructures
of mass digitization and the ways in which they not only offer new ways of
reading, viewing, and structuring cultural material, but also new models of
value and its extraction, and new infrastructures of control. The political
dynamic of this restructuring, I suggest, may meaningfully be referred to as a
form of infrapolitics, insofar as the political work of mass digitization
often happens at the level of infrastructure, in the form of standardization,
dissent, or both. While mass digitization entwines the cultural politics of
analog artifacts and institutions with the infrapolitical logics of the new
digital economies and technologies, there is no clear-cut distinction between the analog and digital realms in this process. Rather, paraphrasing N.
Katherine Hayles, I suggest that mass digitization, like a Janus-figure,
“looks to past and future, simultaneously reinforcing and undermining both.”1

A persistent challenge in the study of mass digitization is the mutability of
the analytical object. The unstable nature of cultural memory archives is not
a new phenomenon. As Derrida points out, they have always been haunted by an
unintended instability, which he calls “archive fever.” Yet, mass digitization
appears to intensify this instability even further, both in its material and
cultural instantiations. Analog preservation practices that seek to stabilize
objects are in the digital realm replaced with dynamic processes of content
migration and software updates. Cultural memory objects become embedded in
what Wendy Chun has referred to as the enduring ephemerality of the digital as
well as the bleeding edge of obsolescence.2

Indeed, from the moment when the seed for this book was first planted to the
time of its publication, the landscape of mass digitization, and the political
battles waged on its maps, has changed considerably. Google Books—which a
decade ago attracted the attention, admiration, and animosity of all—recently
metamorphosed from a giant flood to a quiet trickle. After a spectacle of
press releases on quantitative milestones, epic legal battles, and public
criticisms, Google apparently lost interest in Google Books. Google’s gradual
abandonment of the project resembled more an act of prolonged public ghosting
than a clear-cut break-up, leaving the public to read in between the lines
about where the company was headed: scanning activities dwindled; the Google
Books blog closed along with its Twitter feed; press releases dried up; staff
was laid off; and while scanning activities are still ongoing, they are
limited to works in the public domain, changing the scale considerably.3 One
commentator diagnosed the change of strategy as the demise of “the greatest
humanistic project of our time.”4 Others acknowledged in less dramatic terms
that while Google’s scanning activities may have stopped, its legacy lives on
and is still put to active use.5

In the present context, the important point to make is that a quiet life does
not necessarily equal death. Indeed, this is the lesson we learn from
attending to the subtle workings of infrastructure: the politics of
infrastructure is the politics of what goes on behind the curtains, not only
what is launched to the front page. Thus, as one engineer notes when
confronted with the fate of Google Books, “We’re not focused on shiny features
and things that are very visible to users. … It’s more like behind-the-scenes
work and perfecting the technology—acquiring content, processing it properly
so that we can view the entire book online, and adjusting the search
algorithm.”6 This is a timely reminder that any analysis of the infrapolitics
of mass digitization has to tend not only to the visible and loud politics of construction, but also to the quiet and ongoing politics of infrastructure
maintenance. It makes no sense to write an obituary for Google Books if the
infrastructure is still at work. Moreover, the assemblatic nature of mass
digitization also demands that we do not stop at the immediate borders of a
project when making analytical claims about its infrapolitics. Thus, while
Google Books may have stopped in its tracks, other trains of mass digitization
have pulled up instead, carrying the project of mass digitization forward
toward new, divergent, and experimental sites. Google’s different engagements
with cultural digitization show that an analysis of the politics of Google’s
memory work needs to operate with an assemblatic method, rather than a
delineating approach.7 Europeana and DPLA are also mutable analytical objects, in both economic and cultural form. Europeana, for instance, leads a precarious
life from one EU budget framework to the next, and its cultural identity and
software instantiations have transformed from a digital library, to a portal,
to a platform over the course of only a decade. Last, but not least,
shadow libraries are mediating and multiplying cultural memory objects from
servers and mirror links that sometimes die just as quickly as they emerged.
The question of institutionalization matters greatly in this respect,
outlining what we might call a spectrum of contingency. If a mass digitization
project lives in the margins of institutions, such as in the case of many
shadow libraries, its infrastructure is often fraught with uncertainties. Less
precarious, but nonetheless tumultuous, are the corporate institutions with
their increasingly short market-driven lifespans. And, at the other end of the
spectrum, we find mass digitization projects embedded in bureaucratic
apparatuses whose lumbering budget processes provide publicly funded mass
digitization projects with more stable infrastructures.

The temporal dimension of mass digitization projects also raises important
questions about the horizon of cultural memory in material terms. Should mass digitization, one might ask, also mean the withering of analog cultural memory? This
question seems relevant not least in cases where institutions consider
digitization as a form of preservation that allows them to discard analog
artifacts once digitized. In digital form, we further have to contend with a
new temporal horizon of cultural memory itself, based not only on remembrance but on anticipation in the manner of “If you liked this, you might also like ….” Thus, while cultural memory objects link to objects of the
past, mass digitized cultural memory also gives rise to new methods of
prediction and preemption, for instance in the form of personalization. In
this anticipatory regime, cultural memory becomes subject to perpetual
calculatory activities, processing affects and activities in terms of
likelihoods and probabilistic outcomes.

Thus, cultural memory has today become embedded in new glocalized
infrastructures. On the one hand, these infrastructures present novel
opportunities. Cultural optimists have suggested that mass digitization has
the potential to give rise to new cosmopolitan public spheres untethered from
the straitjackets of national territorializing forces. On the other hand,
critics argue that there is little evidence that cosmopolitan dynamics are in
fact at work. Instead, new colonial and neoliberal platforms arise from a
complex infrastructural apparatus of private and public institutions and
become shaped by political, financial, and social struggles over
representation, control, and ownership of knowledge.

In summary, it is obvious that the scale of mass digitization, public and
private, licit and illicit, has transformed how we engage with texts, cultural
works, and cultural memory. People today have instant access to a wealth of
works that would previously have required large amounts of money, as well as
effort, to engage with. Most of us enjoy the new cultural freedoms we have
been given to roam the archives, collecting and exploring oddities along the
way, and making new connections between works that would previously have been
held separate by taxonomy, geography, and time in the labyrinthine material
and social infrastructures of cultural memory.

A special attraction of mass digitization no doubt lies in its unfathomable
scale and linked nature, and the fantasy and “spectacle of collecting.”8 The
new cultural environment allows the user to accelerate the pace of information
by accessing key works instantly as well as idly rambling in the exotic back
alleys of digitized culture. Mass digitized archives can be explored to
functional, hedonistic, and critical ends (sometimes all at the same time),
and can be used to exhume forgotten works, forgotten authors, and forgotten
topics. Within this paradigm, the user takes center stage—at least
discursively. Suddenly, a link made between a porn magazine and a Courbet
painting could well be a valued cultural connection instead of a frowned-upon
transgression in the halls of high culture. Users do not just download books;
they also upload new folksonomies, “ego-documents,” and new cultural
constellations, which are all welcomed in the name of “citizen science.”
Digitization also infuses texts with new life due to its new connective
properties that allow readers and writers to intimately and
exhibitionistically interact around cultural works, and it provides new ways
of engaging with texts as digital reading migrates toward service-based rather
than hardware-based models of consumption. Digitization allows users to
digitally collect works themselves and indulge in alluring archival riches in
new ways.

But mass digitization also gives rise to a range of new ethical, political,
aesthetic, and methodological questions concerning the spatio-temporality,
ownership, territoriality, re-use, and dissemination of cultural memory
artifacts. Some of those dimensions have been discussed in detail in the
present work and include questions about digital labor, platformization,
management of visibility, ownership, copyright, and other new forms of control, as well as processes of de- and recentralization and privatization. Others have only
been alluded to but continue to gain in relevance as processes of mass
digitization excavate and make public sensitive and contested archival
material. Thus, as the cultural memories and artifacts of indigenous
populations, colonized territories and other marginalized groups are brought
online, as well as artifacts that attest to the violent regimes of colonialism
and patriarchy, an attendant need has emerged for an ethics of care that goes beyond simplistic calls for the right to access and instead attends to the
sensitivity of the digitized material and the ways in which we encounter these
materials.

Combined, these issues show that mass digitization is far from a
straightforward technical affair. Rather, the productive dimensions of mass
digitization emerge from the rubble of disruptive and turbulent political
processes that violently dislocate established frontiers and power dynamics
and give rise to new ones that are yet to be interpreted. Within these
turbulent processes, the familiar narratives of empowered users collecting and
connecting works and ideas in new and transgressive ways all too often leave
out the simultaneous and integrated story of how the labyrinthine
infrastructures of mass digitization also write themselves on the backs of the users, collecting them and their thoughts in the process and subjecting them
to new economic logics and political regimes. As Lisa Nakamura reminds us, “by
availing ourselves of its networked virtual bookshelves to collect and display
our readerliness in a postprint age, we have become objects to be collected.”9
Thus, as we gather vintage images on Pinterest, collect books in Google Books,
and retweet sound files from Europeana, we do well not only to question the
cultural logic and ethics of these actions but also to remember that as we
collect and connect, we are also ourselves collected and connected.

If the power of mass digitization happens at the level of infrastructure,
political resistance will have to take the form of infrastructural
intervention. We play a role in the formulation of the ethics of such
interventions, and as such we have to be willing to abandon the predominant
tropes of scale, access, and acceleration in favor of an infrapolitics of
care—a politics that offers opportunities for mindful, slow, and focused
encounters.

## Notes

1. Hayles 1999, 17.
2. Chun 2008; Chun 2017.
3. Murrell 2017.
4. James Somers, “Torching the Modern-Day Library of Alexandria,” _The Atlantic_, April 20, 2017.
5. Jennifer Howard, “What Happened to Google’s Effort to Scan Millions of University Library Books?,” _EdSurge_, August 10, 2017, scan-millions-of-university-library-books>.
6. Scott Rosenberg, “How Google Books Got Lost,” _Wired_, November 4, 2017, /how-google-book-search-got-lost>.
7. What to make, for instance, of the new trend of employing Google’s neural networks to find one’s museum doppelgänger from the company’s image database? Or of the fact that Google Cultural Institute is consistently turning out new cultural memory hacks such as its cardboard VR glasses, its indoor mapping of museum spaces, and its gigapixel Art Camera, which reproduces artworks in uncanny detail? Or of the expansion of its remit from cultural memory institutions to also encompass natural history museums? See, for example, Adrian Chen, “The Google Arts & Culture App and the Rise of the ‘Coded Gaze,’” _New Yorker_, January 26, 2018, the-rise-of-the-coded-gaze-doppelganger>.
8. Nakamura 2013, 240.
9. Ibid., 241.

#
References

1. Abbate, Janet. 2012. _Recoding Gender: Women’s Changing Participation in Computing_. Cambridge, MA: MIT Press.
2. Abrahamsen, Rita, and Michael C. Williams. 2011. _Security beyond the State: Private Security in International Politics_. Cambridge: Cambridge University Press.
3. Adler-Nissen, Rebecca, and Thomas Gammeltoft-Hansen. 2008. _Sovereignty Games: Instrumentalizing State Sovereignty in Europe and Beyond_. New York: Palgrave Macmillan.
4. Agre, Philip E. 2000. “The Market Logic of Information.” _Knowledge, Technology & Policy_ 13 (3): 67–77.
5. Aiden, Erez, and Jean-Baptiste Michel. 2013. _Uncharted: Big Data as a Lens on Human Culture_. New York: Riverhead Books.
6. Ambati, Vamshi, N. Balakrishnan, Raj Reddy, Lakshmi Pratha, and C. V. Jawahar. 2006. “The Digital Library of India Project: Process, Policies and Architecture.” _CiteSeer_. .
7. Amoore, Louise. 2013. _The Politics of Possibility: Risk and Security beyond Probability_. Durham, NC: Duke University Press.
8. Anderson, Ben, and Colin McFarlane. 2011. “Assemblage and Geography.” _Area_ 43 (2): 124–127.
9. Anderson, Benedict. 1991. _Imagined Communities: Reflections on the Origin and Spread of Nationalism_. London: Verso.
10. Arms, William Y. 2000. _Digital Libraries_. Cambridge, MA: MIT Press.
11. Arvanitakis, James, and Martin Fredriksson. 2014. _Piracy: Leakages from Modernity_. Sacramento, CA: Litwin Books.
12. Association of Research Libraries. 2009. “ARL Encourages Members to Refrain from Signing Nondisclosure or Confidentiality Clauses.” _ARL News_ , June 5.
13. Auletta, Ken. 2009. _Googled: The End of the World As We Know It_. New York: Penguin Press.
14. Baker, Nicholson. 2002. _The Double Fold: Libraries and the Assault on Paper_. London: Vintage Books.
15. Barthes, Roland. 1977. “From Work to Text” and “The Grain of the Voice.” In _Image Music Text_ , ed. Roland Barthes. London: Fontana Press.
16. Barthes, Roland. 1981. _Camera Lucida: Reflections on Photography_. New York: Hill and Wang.
17. Bates, David W. 2002. _Enlightenment Aberrations: Error and Revolution in France_. Ithaca, NY: Cornell University Press.
18. Batt, William H. 1984. “Infrastructure: Etymology and Import.” _Journal of Professional Issues in Engineering_ 110 (1): 1–6.
19. Bawden, David, and Lyn Robinson. 2009. “The Dark Side of Information: Overload, Anxiety and Other Paradoxes and Pathologies.” _Journal of Information Science_ 35 (2): 180–191.
20. Beck, Ulrich. 1996. “World Risk Society as Cosmopolitan Society? Ecological Questions in a Framework of Manufactured Uncertainties.” _Theory, Culture & Society_ 13 (4): 1–32.
21. Beecher, Lyman. 2012. _Faith Once Delivered to the Saints: A Sermon Delivered at Worcester, Mass., Oct. 15, 1823._ Farmington Hills, MI: Gale, Sabin Americana.
22. Belder, Lucky. 2015. “Cultural Heritage Institutions as Entrepreneurs.” In _Cultivate!: Cultural Heritage Institutions, Copyright & Cultural Diversity in the European Union & Indonesia_, eds. M. de Cock Buning, R. W. Bruin, and Lucky Belder, 157–196. Amsterdam: DeLex.
23. Benjamin, Walter. 1985a. “Central Park.” _New German Critique, NGC_ 34 (Winter): 32–58.
24. Benjamin, Walter. 1985b. “The Flâneur.” In _Charles Baudelaire: A Lyric Poet in the Era of High Capitalism_. Translated by Harry Zohn. London: Verso.
25. Benjamin, Walter. 1999. _The Arcades Project_. Cambridge, MA: Harvard University Press.
26. Béquet, Gaëlle. 2009. _Digital Library as a Controversy: Gallica vs Google_. Proceedings of the 9th Conference Libraries in the Digital Age (Dubrovnik, Zadar, May 25–29, 2009). .
27. Berardi, Franco, Gary Genosko, and Nicholas Thoburn. 2011. _After the Future_. Edinburgh, UK: AK Press.
28. Berk, Hillary L. 2015. “The Legalization of Emotion: Managing Risk by Managing Feelings in Contracts for Surrogate Labor.” _Law & Society Review_ 49 (1): 143–177.
29. Bishop, Catherine. 2016. “The Serendipity of Connectivity: Piecing Together Women’s Lives in the Digital Archive.” _Women’s History Review_ 26 (5): 766–780.
30. Bivort, Olivier. 2013. “Le romantisme et la ‘langue de Voltaire.’” _Revue italienne d’études françaises_ 3. DOI: 10.4000/rief.211.
31. Bjelić, Dušan I. 2016. _Intoxication, Modernity, and Colonialism: Freud’s Industrial Unconscious, Benjamin’s Hashish Mimesis_. New York: Palgrave Macmillan.
32. Blair, Ann, and Peter Stallybrass. 2010. “Mediating Information, 1450–1800”. In _This Is Enlightenment_ , eds. Clifford Siskin and William B. Warner. Chicago: University of Chicago Press.
33. Blair, Ann. 2003. “Reading Strategies for Coping with Information Overload ca. 1550–1700.” _Journal of the History of Ideas_ 64 (1): 11–28.
34. Bloom, Harold. 2009. _The Labyrinth_. New York: Bloom’s Literary Criticism.
35. Bodó, Balazs. 2015. “The Common Pathways of Samizdat and Piracy.” In _Samizdat: Between Practices and Representations_ , ed. V. Parisi. Budapest: CEU Institute for Advanced Study. Available at SSRN; .
36. Bodó, Balazs. 2016. “Libraries in the Post-Scarcity Era.” In _Copyrighting Creativity: Creative Values, Cultural Heritage Institutions and Systems of Intellectual Property_ , ed. Helle Porsdam. New York: Routledge.
37. Bogost, Ian, and Nick Montfort. 2009. “Platform Studies: Frequently Asked Questions.” _Proceeding of the Digital Arts and Culture Conference_. .
38. Borges, Jorge Luis. 1978. “An Autobiographical Essay.” In _The Aleph and Other Stories, 1933–1969: Together with Commentaries and an Autobiographical Essay_. New York: E. P. Dutton.
39. Borges, Jorge Luis. 2001. “The Total Library.” In _The Total Library: Non-fiction 1922–1986_. London: Penguin.
40. Borges, Jorge Luis, and L. S. Dembo. 1970. “An Interview with Jorge Luis Borges.” _Contemporary Literature_ 11 (3): 315–325.
41. Borghi, Maurizio. 2012. “Knowledge, Information and Values in the Age of Mass Digitisation.” In _Value: Sources and Readings on a Key Concept of the Globalized World_ , ed. Ivo de Gennaro. Leiden, the Netherlands: Brill.
42. Borghi, Maurizio, and Stavroula Karapapa. 2013. _Copyright and Mass Digitization: A Cross-Jurisdictional Perspective_. Oxford: Oxford University Press.
43. Borgman, Christine L. 2015. _Big Data, Little Data, No Data: Scholarship in the Networked World_. Cambridge, MA: MIT Press.
44. Bottando, Evelyn. 2012. _Hedging the Commons: Google Books, Libraries, and Open Access to Knowledge_. Iowa City: University of Iowa.
45. Bowker, Geoffrey C., Karen Baker, Florence Millerand, and David Ribes. 2010. “Toward Information Infrastructure Studies: Ways of Knowing in a Networked Environment.” In _The International Handbook of Internet Research_ , eds. Jeremy Hunsinger, Lisbeth Klastrup, and Matthew Allen. Dordrecht, the Netherlands: Springer.
46. Bowker, Geoffrey C, and Susan L. Star. 1999. _Sorting Things Out: Classification and Its Consequences_. Cambridge, MA: MIT Press.
47. Brin, Sergey. 2009. “A Library to Last Forever.” _New York Times_ , October 8.
48. Brin, Sergey, and Lawrence Page. 1998. “The Anatomy of a Large-Scale Hypertextual Web Search Engine.” _Computer Networks and ISDN Systems_ 30 (1–7): 107. .
49. Buckholtz, Alison. 2016. “New Ideas for Financing American Infrastructure: A Conversation with Henry Petroski.” _World Bank Group, Public-Private Partnerships Blog_ , March 29.
50. Buck-Morss, Susan. 2006. “The Flaneur, the Sandwichman and the Whore: The Politics of Loitering.” _New German Critique_ (39): 99–140.
51. Budds, Diana. 2016. “Rem Koolhaas: ‘Architecture Has a Serious Problem Today.’” _CoDesign_ 21 (May). .
52. Burkart, Patrick. 2014. _Pirate Politics: The New Information Policy Contests_. Cambridge, MA: MIT Press.
53. Burton, James, and Daisy Tam. 2016. “Towards a Parasitic Ethics.” _Theory, Culture & Society_ 33 (4): 103–125.
54. Busch, Lawrence. 2011. _Standards: Recipes for Reality_. Cambridge, MA: MIT Press.
55. Caley, Seth. 2017. “Digitization for the Masses: Taking Users Beyond Simple Searching in Nineteenth-Century Collections Online.” _Journal of Victorian Culture : JVC_ 22 (2): 248–255.
56. Cadogan, Garnette. 2016. “Walking While Black.” Literary Hub. July 8. .
57. Callon, Michel, Madeleine Akrich, Sophie Dubuisson-Quellier, Catherine Grandclément, Antoine Hennion, Bruno Latour, Alexandre Mallard, et al. 2016. _Sociologie des agencements marchands: Textes choisis_. Paris: Presses des Mines.
58. Cameron, Fiona, and Sarah Kenderdine. 2007. _Theorizing Digital Cultural Heritage: A Critical Discourse_. Cambridge, MA: MIT Press.
59. Canepi, Kitti, Becky Ryder, Michelle Sitko, and Catherine Weng. 2013. _Managing Microforms in the Digital Age_. Association for Library Collections & Technical Services. .
60. Carey, Quinn Ann. 2015, “Maksim Moshkov and lib.ru: Russia’s Own ‘Gutenberg.’” _TeleRead: Bring the E-Books Home_. December 5. .
61. Carpentier, Nico. 2011. _Media and Participation: A Site of Ideological-Democratic Struggle_. Bristol, UK: Intellect.
62. Carr, Nicholas. 2006. “The Engine of Serendipity.” _Rough Type_ , May 18.
63. Cassirer, Ernst. 1944. _An Essay on Man: An Introduction to a Philosophy of Human Culture_. New Haven, CT: Yale University Press.
64. Castells, Manuel. 1996a. _The Rise of the Network Society_. Malden, MA: Blackwell Publishers.
65. Castells, Manuel. 1996b. _The Informational City: Information Technology, Economic Restructuring, and the Urban-Regional Process_. Cambridge: Blackwell.
66. Castells, Manuel, and Gustavo Cardoso. 2012. “Piracy Cultures: Editorial Introduction.” _International Journal of Communication_ 6 (1): 826–833.
67. Chabal, Emile. 2013. “The Rise of the Anglo-Saxon: French Perceptions of the Anglo-American World in the Long Twentieth Century.” _French Politics, Culture & Society_ 31 (1): 24–46.
68. Chartier, Roger. 2004. “Languages, Books, and Reading from the Printed Word to the Digital Text.” _Critical Inquiry_ 31 (1): 133–152.
69. Chen, Ching-chih. 2005. “Digital Libraries and Universal Access in the 21st Century: Realities and Potential for US-China Collaboration.” In _Proceedings of the 3rd China-US Library Conference, Shanghai, China, March 22–25_ , 138–167. Beijing: National Library of China.
70. Chrisafis, Angelique. 2008. “Dante to Dialects: EU’s Online Renaissance.” _Guardian_ , November 21. .
71. Chun, Wendy H. K. 2006. _Control and Freedom: Power and Paranoia in the Age of Fiber Optics_. Cambridge, MA: MIT Press.
72. Chun, Wendy Hui Kyong. 2008. “The Enduring Ephemeral, or the Future Is a Memory.” _Critical Inquiry_ 35 (1): 148–171.
73. Chun, Wendy H. K. 2017. _Updating to Remain the Same_. Cambridge, MA: MIT Press.
74. Clarke, Michael Tavel. 2009. _These Days of Large Things: The Culture of Size in America, 1865–1930_. Ann Arbor: University of Michigan Press.
75. Cohen, Jerome Bernard. 2006. _The Triumph of Numbers: How Counting Shaped Modern Life_. New York: W.W. Norton.
76. Conway, Paul. 2010. “Preservation in the Age of Google: Digitization, Digital Preservation, and Dilemmas.” _The Library Quarterly: Information, Community, Policy_ 80 (1): 61–79.
77. Courant, Paul N. 2006. “Scholarship and Academic Libraries (and Their Kin) in the World of Google.” _First Monday_ 11 (8).
78. Coyle, Karen. 2006. “Mass Digitization of Books.” _Journal of Academic Librarianship_ 32 (6): 641–645.
79. Darnton, Robert. 2009. _The Case for Books: Past, Present, and Future_. New York: Public Affairs.
80. Daston, Lorraine. 2012. “The Sciences of the Archive.” _Osiris_ 27 (1): 156–187.
81. Davison, Graeme. 2009. “Speed-Relating: Family History in a Digital Age.” _History Australia_ 6 (2). .
82. Deegan, Marilyn, and Kathryn Sutherland. 2009. _Transferred Illusions: Digital Technology and the Forms of Print_. Farnham, UK: Ashgate.
83. de la Durantaye, Katharine. 2011. “H Is for Harmonization: The Google Book Search Settlement and Orphan Works Legislation in the European Union.” _New York Law School Law Review_ 55 (1): 157–174.
84. DeLanda, Manuel. 2006. _A New Philosophy of Society: Assemblage Theory and Social Complexity_. London: Continuum.
85. Deleuze, Gilles. 1997. “Postscript on Control Societies.” In _Negotiations 1972–1990_ , 177–182. New York: Columbia University Press.
86. Deleuze, Gilles. 2013. _Difference and Repetition_. London: Bloomsbury Academic.
87. Deleuze, Gilles, and Félix Guattari. 1987. _A Thousand Plateaus: Capitalism and Schizophrenia_. Minneapolis: University of Minnesota Press.
88. DeNardis, Laura. 2011. _Opening Standards: The Global Politics of Interoperability_. Cambridge, MA: MIT Press.
89. DeNardis, Laura. 2014. “The Social Media Challenge to Internet Governance.” In _Society and the Internet: How Networks of Information and Communication Are Changing Our Lives_ , eds. Mark Graham and William H. Dutton. Oxford: Oxford University Press.
90. Derrida, Jacques. 1996. _Archive Fever: A Freudian Impression_. Chicago: University of Chicago Press.
91. Derrida, Jacques. 2005. _Paper Machine_. Stanford, CA: Stanford University Press.
92. Dewey, Melvin. 1926. “Our Next Half-Century.” _Bulletin of the American Library Association_ 20 (10): 309–312.
93. Dinshaw, Carolyn. 2012. _How Soon Is Now?: Medieval Texts, Amateur Readers, and the Queerness of Time_. Durham, NC: Duke University Press.
94. Doob, Penelope Reed. 1994. _The Idea of the Labyrinth: From Classical Antiquity Through the Middle Ages_. Ithaca, NY: Cornell University Press.
95. Dörk, Marian, Sheelagh Carpendale, and Carey Williamson. 2011. “The Information Flaneur: A Fresh Look at Information Seeking.” _Conference on Human Factors in Computing Systems—Proceedings_ , 1215–1224.
96. Doward, Jamie. 2009. “Angela Merkel Attacks Google’s Plans to Create a Global Online Library.” _Guardian_ , October 11. .
97. Duguid, Paul. 2007. “Inheritance and Loss? A Brief Survey of Google Books.” _First Monday_ 12 (8). .
98. Earnshaw, Rae A., and John Vince. 2007. _Digital Convergence: Libraries of the Future_. London: Springer.
99. Easley, David, and Jon Kleinberg. 2010. _Networks, Crowds, and Markets: Reasoning About a Highly Connected World_. New York: Cambridge University Press.
100. Easterling, Keller. 2014. _Extrastatecraft: The Power of Infrastructure Space_. Verso.
101. Eckstein, Lars, and Anja Schwarz. 2014. _Postcolonial Piracy: Media Distribution and Cultural Production in the Global South_. London: Bloomsbury.
102. Eco, Umberto. 2014. _The Name of the Rose_. Boston: Mariner Books.
103. Edwards, Paul N. 2003. “Infrastructure and Modernity: Force, Time and Social Organization in the History of Sociotechnical Systems.” In _Modernity and Technology_ , eds. Thomas J. Misa, Philip Brey, and Andrew Feenberg. Cambridge, MA: MIT Press.
104. Edwards, Paul N., Steven J. Jackson, Melissa K. Chalmers, Geoffrey C. Bowker, Christine L. Borgman, David Ribes, Matt Burton, and Scout Calvert. 2012. _Knowledge Infrastructures: Intellectual Frameworks and Research Challenges_. Report of a workshop sponsored by the National Science Foundation and the Sloan Foundation University of Michigan School of Information, May 25–28. .
105. Ensmenger, Nathan. 2012. _The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise_. Cambridge, MA: MIT Press.
106. Eyal, Nir. 2014. _Hooked: How to Build Habit-Forming Products_. Princeton, NJ: Princeton University Press.
107. Featherstone, Mike. 1998. “The Flaneur, the City and Virtual Public Life.” _Urban Studies (Edinburgh, Scotland)_ 35 (5–6): 909–925.
108. Featherstone, Mike. 2000. “Archiving Cultures.” _British Journal of Sociology_ 51 (1): 161–184.
109. Fiske, John. 1987. _Television Culture_. London: Methuen.
110. Fjeld, Per Olaf, and Sverre Fehn. 2009. _Sverre Fehn: The Pattern of Thoughts_. New York: Monacelli Press.
111. Flyverbom, Mikkel, Paul M. Leonardi, Cynthia Stohl, and Michael Stohl. 2016. “The Management of Visibilities in the Digital Age.” _International Journal of Communication_ 10 (1): 98–109.
112. Foucault, Michel. 2002. _Archaeology of Knowledge_. London: Routledge.
113. Foucault, Michel. 2004. _Death and the Labyrinth: The World of Raymond Roussel_. Continuum International Publishing Group Ltd.
114. Foucault, Michel. 2009. _Security, Territory, Population: Lectures at the College de France, 1977–1978_. Basingstoke, UK: Palgrave Macmillan.
115. Fredriksson, Martin, and James Arvanitakis. 2014. _Piracy: Leakages from Modernity_. Sacramento, CA: Litwin Books.
116. Freedgood, Elaine. 2013. “Divination.” _PMLA_ 128 (1): 221–225.
117. Fuchs, Christian. 2014. _Digital Labour and Karl Marx_. New York: Routledge.
118. Fuller, Matthew, and Andrew Goffey. 2012. _Evil Media_. Cambridge, MA: MIT Press.
119. Galloway, Alexander R. 2013a. _The Interface Effect_. Cambridge: Polity Press.
120. Galloway Alexander, R. 2013b. “The Poverty of Philosophy: Realism and Post-Fordism.” _Critical Inquiry_ 39 (2): 347–366.
121. Gardner, Carolyn Caffrey, and Gabriel J. Gardner. 2017. “Fast and Furious (at Publishers): The Motivations behind Crowdsourced Research Sharing.” _College & Research Libraries_ 78 (2): 131–149.
122. Garrett, Jeffrey. 1999. “Redefining Order in the German Library, 1775–1825.” _Eighteenth-Century Studies_ 33 (1): 103–123.
123. Gibbon, Peter, and Lasse F. Henriksen. 2012. “A Standard Fit for Neoliberalism.” _Comparative Studies in Society and History_ 54 (2): 275–307.
124. Giesler, Markus. 2006. “Consumer Gift Systems.” _Journal of Consumer Research_ 33 (2): 283–290.
125. Gießmann, Sebastian. 2015. _Medien Der Kooperation_. Siegen, Germany: Universitet Verlag.
126. Gillespie, Tarleton. 2010. “The Politics of ‘Platforms.’” _New Media & Society_ 12 (3): 347–364.
127. Gillespie, Tarleton. 2017. “Algorithmically Recognizable: Santorum’s Google Problem, and Google’s Santorum Problem.” _Information Communication and Society_ 20 (1): 63–80.
128. Gladwell, Malcolm. 2000. _The Tipping Point: How Little Things Can Make a Big Difference_. Boston: Little, Brown.
129. Goldate, Steven. 1996. “The Cyberflaneur: Spaces and Places on the Internet.” _Art Monthly Australia_ 91:15–18.
130. Goldsmith, Jack L., and Tim Wu. 2006. _Who Controls the Internet?: Illusions of a Borderless World_. New York: Oxford University Press.
131. Goldsmith, Kenneth. 2007. “UbuWeb Wants to Be Free.” Last modified July 18, 2007. .
132. Golumbia, David. 2009. _The Cultural Logic of Computation_. Cambridge, MA: Harvard University Press.
133. Goriunova, Olga. 2012. _Art Platforms and Cultural Production on the Internet_. New York: Routledge.
134. Gradmann, Stephan. 2009. “Interoperability: A Key Concept for Large Scale, Persistent Digital Libraries.” 1st DL.org Workshop at 13th European Conference on Digital Libraries (ECDL).
135. Greene, Mark. 2010. “MPLP: It’s Not Just for Processing Anymore.” _American Archivist_ 73 (1): 175–203.
136. Grewal, David S. 2008. _Network Power: The Social Dynamics of Globalization_. New Haven, CT: Yale University Press.
137. Hacking, Ian. 1995. _Rewriting the Soul: Multiple Personality and the Sciences of Memory_. Princeton, NJ: Princeton University Press.
138. Hacking, Ian. 2010. _The Taming of Chance_. Cambridge: Cambridge University Press.
139. Hagel, John. 2012. _The Power of Pull: How Small Moves, Smartly Made, Can Set Big Things in Motion_. New York: Basic Books.
140. Haggerty, Kevin D, and Richard V. Ericson. 2000. “The Surveillant Assemblage.” _British Journal of Sociology_ 51 (4): 605–622.
141. Hall, Gary. 2008. _Digitize This Book!: The Politics of New Media, or Why We Need Open Access Now_. Minneapolis: University of Minnesota Press.
142. Hall, Mark, et al. 2012. “PATHS—Exploring Digital Cultural Heritage Spaces.” In _Theory and Practice of Digital Libraries. TPDL 2012_ , vol. 7489, 500–503. Lecture Notes in Computer Science. Berlin: Springer.
143. Hall, Stuart, and Fredric Jameson. 1990. “Clinging to the Wreckage: a Conversation.” _Marxism Today_ (September): 28–31.
144. Hardt, Michael, and Antonio Negri. 2007. _Empire_. Cambridge, MA: Harvard University Press.
145. Hardt, Michael, and Antonio Negri. 2009. _Commonwealth_. Cambridge, MA: Harvard University Press.
146. Hartmann, Maren. 1999. “The Unknown Artificial Metaphor or: The Difficult Process of Creation or Destruction.” In _Next Cyberfeminist International_ , ed. Cornelia Sollfrank. Hamburg, Germany: obn. .
147. Hartmann, Maren. 2004. _Technologies and Utopias: The Cyberflaneur and the Experience of “Being Online.”_ Munich: Fischer.
148. Hayles, N. Katherine. 1993. “Seductions of Cyberspace.” In _Lost in Cyberspace: Essays and Far-Fetched Tales_ , ed. Val Schaffner. Bridgehampton, NY: Bridge Works Pub. Co.
149. Hayles, N. Katherine. 2005. _My Mother Was a Computer: Digital Subjects and Literary Texts_. Chicago: University of Chicago Press.
150. Helmond, Anne. 2015. “The Platformization of the Web: Making Web Data Platform Ready.” _Social Media + Society_ 1 (2). .
151. Hicks, Marie. 2018. _Programmed Inequality: How Britain Discarded Women Technologists and Lost its Edge in Computing_. Cambridge, MA: MIT Press.
152. Higgins, Vaughan, and Wendy Larner. 2010. _Calculating the Social: Standards and the Reconfiguration of Governing_. Basingstoke, UK: Palgrave Macmillan.
153. Holzer, Boris, and P. S. Mads. 2003. “Rethinking Subpolitics: Beyond the ‘Iron Cage’ of Modern Politics?” _Theory, Culture & Society_ 20 (2): 79–102.
154. Huyssen, Andreas. 2015. _Miniature Metropolis: Literature in an Age of Photography and Film_. Cambridge, MA: Harvard University Press.
155. Imerito, Tom. 2009. “Electrifying Knowledge.” _Pittsburgh Quarterly Magazine_. Summer. .
156. Janssen, Olaf. D. 2011. “Digitizing All Dutch Books, Newspapers and Magazines—730 Million Pages in 20 Years—Storing It, and Getting It Out There.” In _Research and Advanced Technology for Digital Libraries_ , eds. S. Gradmann, F. Borri, C. Meghini, and H. Schuldt, 473–476. TPDL 2011. Lecture Notes in Computer Science, vol. 6966. Berlin: Springer.
157. Jasanoff, Sheila. 2013. “Epistemic Subsidiarity—Coexistence, Cosmopolitanism, Constitutionalism.” _European Journal of Risk Regulation_ 4 (2) 133–141.
158. Jeanneney, Jean N. 2007. _Google and the Myth of Universal Knowledge: A View from Europe_. Chicago: University of Chicago Press.
159. Jones, Elisabeth A., and Joseph W. Janes. 2010. “Anonymity in a World of Digital Books: Google Books, Privacy, and the Freedom to Read.” _Policy & Internet_ 2 (4): 43–75.
160. Jøsevold, Roger. 2016. “A National Library for the 21st Century—Knowledge and Cultural Heritage Online.” _Alexandria: The Journal of National and International Library and Information Issues_ 26 (1): 5–14.
161. Kang, Minsoo. 2011. _Sublime Dreams of Living Machines: The Automaton in the European Imagination_. Cambridge, MA: Harvard University Press.
162. Karaganis, Joe. 2011. _Media Piracy in Emerging Economies_. New York: Social Science Research Council.
163. Karaganis, Joe. 2018. _Shadow Libraries: Access to Educational Materials in Global Higher Education_. Cambridge, MA: MIT Press.
164. Kaufman, Peter B., and Jeff Ubois. 2007. “Good Terms—Improving Commercial-Noncommercial Partnerships for Mass Digitization.” _D-Lib Magazine_ 13 (11–12). .
165. Kelley, Robin D. G. 1994. _Race Rebels: Culture, Politics, and the Black Working Class_. New York: Free Press.
166. Kelly, Kevin. 1994. _Out of Control: The Rise of Neo-Biological Civilization_. Reading, MA: Addison-Wesley.
167. Kenney, Anne R., Nancy Y. McGovern, Ida T. Martinez, and Lance J. Heidig. 2003. “Google Meets eBay: What Academic Librarians Can Learn from Alternative Information Providers.” _D-Lib Magazine_ 9 (6).
168. Kiriya, Ilya. 2012. “The Culture of Subversion and Russian Media Landscape.” _International Journal of Communication_ 6 (1): 446–466.
169. Koevoets, Sanne. 2013. _Into the Labyrinth of Knowledge and Power: The Library as a Gendered Space in the Western Imaginary_. Utrecht, the Netherlands: Utrecht University.
170. Kolko, Joyce. 1988. _Restructuring the World Economy_. New York: Pantheon Books.
171. Komaromi, Ann. 2012. “Samizdat and Soviet Dissident Publics.” _Slavic Review_ 71 (1): 70–90.
172. Kramer, Bianca. 2016a. “Sci-Hub: Access or Convenience? A Utrecht Case Study, Part 1.” _I &M / I&O 2.0_, June 20. .
173. Kramer, Bianca. 2016b. “Sci-Hub: Access or Convenience? A Utrecht Case Study, Part 2.” .
174. Krysa, Joasia. 2006. _Curating Immateriality: The Work of the Curator in the Age of Network Systems_. Brooklyn, NY: Autonomedia.
175. Kurgan, Laura. 2013. _Close up at a Distance: Mapping, Technology, and Politics_. Brooklyn, NY: Zone Books.
176. Labi, Aisha. 2005. “France Plans to Digitize Its ‘Cultural Patrimony’ and Defy Google’s ‘Domination.’” _Chronicle of Higher Education_ (March): 21.
177. Larkin, Brian. 2008. _Signal and Noise: Media, Infrastructure, and Urban Culture in Nigeria_. Durham, NY: Duke University Press.
178. Latour, Bruno. 2005. _Reassembling the Social: An Introduction to Actor-Network Theory_. Oxford: Oxford University Press.
179. Latour, Bruno. 2007. “Beware, Your Imagination Leaves Digital Traces.” _Times Higher Literary Supplement_ , April 6.
180. Latour, Bruno. 2008. _What Is the Style of Matters of Concern?: Two Lectures in Empirical Philosophy_. Assen, the Netherlands: Koninklijke Van Gorcum.
181. Lavoie, Brian F., and Lorcan Dempsey. 2004. “Thirteen Ways of Looking at Digital Preservation.” _D-Lib Magazine_ 10 (July/August). .
182. Leetaru, Kalev. 2008. “Mass Book Digitization: The Deeper Story of Google Books and the Open Content Alliance.” _First Monday_ 13 (10). .
183. Lefebvre, Henri. 2009. _The Production of Space_. Malden, MA: Blackwell.
184. Lefler, Rebecca. 2007. “‘Europeana’ Ready for Maiden Voyage.” _Hollywood Reporter_ , March 23. .
185. Lessig, Lawrence. 2005a. “Lawrence Lessig on Interoperability.” _Creative Commons_ , October 19. .
186. Lessig, Lawrence. 2005b. _Free Culture: The Nature and Future of Creativity_. New York: Penguin Books.
187. Lessig, Lawrence. 2010. “For the Love of Culture—Will All of Our Literary Heritage Be Available to Us in the Future? Google, Copyright, and the Fate of American Books.” _New Republic_ 24.
188. Levy, Steven. 2011. _In the Plex: How Google Thinks, Works, and Shapes Our Lives_. New York: Simon & Schuster.
189. Lewis, Jane. 1987. _Labour and Love: Women’s Experience of Home and Family, 1850–1940_. Oxford: Blackwell.
190. Liang, Lawrence. 2009. “Piracy, Creativity and Infrastructure: Rethinking Access to Culture,” July 20.
191. Liu, Jean. 2013. “Interactions: The Numbers Behind #ICanHazPDF.” _Altmetric_ , May 9. .
192. Locke, John. 2003. _Two Treatises of Government: And a Letter Concerning Toleration_. New Haven, CT: Yale University Press.
193. Martin, Andrew, and George Ross. 2004. _Euros and Europeans: Monetary Integration and the European Model of Society_. New York: Cambridge University Press.
194. Mbembe, Achille. 2002. “The Power of the Archive and its Limits.” In _Refiguring the Archive_ , ed. Carolyn Hamilton. Cape Town, South Africa: David Philip.
195. McDonough, Jerome. 2009. “XML, Interoperability and the Social Construction of Markup Languages: The Library Example.” _Digital Humanities Quarterly_ 3 (3). .
196. McPherson, Tara. 2012. “U.S. Operating Systems at Mid-Century: The Intertwining of Race and UNIX.” In _Race After the Internet_ , eds. Lisa Nakamura and Peter Chow-White. New York: Routledge.
197. Meckler, Alan M. 1982. _Micropublishing: A History of Scholarly Micropublishing in America, 1938–1980_. Westport, CT: Greenwood Press.
198. Medak, Tomislav, et al. 2016. _The Radiated Book_. .
199. Merton, Robert K., and Elinor Barber. 2004. _The Travels and Adventures of Serendipity: A Study in Sociological Semantics and the Sociology of Science_. Princeton, NJ: Princeton University Press.
200. Meunier, Sophie. 2003. “France’s Double-Talk on Globalization.” _French Politics, Culture & Society_ 21:20–34.
201. Meunier, Sophie. 2007. “The Distinctiveness of French Anti-Americanism.” In _Anti-Americanisms in World Politics_ , eds. Peter J. Katzenstein and Robert O. Keohane. Ithaca, NY: Cornell University Press.
202. Michel, Jean-Baptiste, et al. 2011. “Quantitative Analysis of Culture Using Millions of Digitized Books.” _Science_ 331 (6014):176–182.
203. Midbon, Mark. 1980. “Capitalism, Liberty, and the Development of the Library.” _Journal of Library History (Tallahassee, Fla.)_ 15 (2): 188–198.
204. Miksa, Francis L. 1983. _Melvil Dewey: The Man and the Classification_. Albany, NY: Forest Press.
205. Mitropoulos, Angela. 2012. _Contract and Contagion: From Biopolitics to Oikonomia_. Brooklyn, NY: Minor Compositions.
206. Mjør, Kåre Johan. 2009. “The Online Library and the Classic Literary Canon in Post-Soviet Russia: Some Observations on ‘The Fundamental Electronic Library of Russian Literature and Folklore.’” _Digital Icons: Studies in Russian, Eurasian and Central European New Media_ 1 (2): 83–99.
207. Montagnani, Maria Lillà, and Maurizio Borghi. 2008. “Promises and Pitfalls of the European Copyright Law Harmonisation Process.” In _The European Union and the Culture Industries: Regulation and the Public Interest_ , ed. David Ward. Aldershot, UK: Ashgate.
208. Murrell, Mary. 2017. “Unpacking Google’s Library.” _Limn_ (6). .
209. Nakamura, Lisa. 2002. _Cybertypes: Race, Ethnicity, and Identity on the Internet_. New York: Routledge.
210. Nakamura, Lisa. 2013. “‘Words with Friends’: Socially Networked Reading on Goodreads.” _PMLA_ 128 (1): 238–243.
211. Nava, Mica, and Alan O’Shea. 1996. _Modern Times: Reflections on a Century of English Modernity_ , 38–76. London: Routledge.
212. Negroponte, Nicholas. 1995. _Being Digital_. New York: Knopf.
213. Neubert, Michael. 2008. “Google’s Mass Digitization of Russian-Language Books.” _Slavic & East European Information Resources_ 9 (1): 53–62.
214. Nicholson, William. 1819. “Platform.” In _British Encyclopedia: Or, Dictionary of Arts and Sciences, Comprising an Accurate and Popular View of the Present Improved State of Human Knowledge_. Philadelphia: Mitchell, Ames, and White.
215. Niggemann, Elisabeth. 2011. _The New Renaissance: Report of the “Comité Des Sages.”_ Brussels: Comité des Sages.
216. Noble, Safiya Umoja, and Brendesha M. Tynes. 2016. _The Intersectional Internet: Race, Sex, Class and Culture Online_. New York: Peter Lang Publishing.
217. Nord, Deborah Epstein. 1995. _Walking the Victorian Streets: Women, Representation, and the City_. Ithaca, NY: Cornell University Press.
218. Norvig, Peter. 2012. “Colorless Green Ideas Learn Furiously: Chomsky and the Two Cultures of Statistical Learning.” _Significance_ (August): 30–33.
219. O’Neill, Paul, and Søren Andreasen. 2011. _Curating Subjects_. London: Open Editions.
220. O’Neill, Paul, and Mick Wilson. 2010. _Curating and the Educational Turn_. London: Open Editions.
221. Ong, Aihwa, and Stephen J. Collier. 2005. _Global Assemblages: Technology, Politics, and Ethics As Anthropological Problems_. Malden, MA: Blackwell Pub.
222. Otlet, Paul, and W. Boyd Rayward. 1990. _International Organisation and Dissemination of Knowledge_. Amsterdam: Elsevier.
223. Palfrey, John G. 2015. _Bibliotech: Why Libraries Matter More Than Ever in the Age of Google_. New York: Basic Books.
224. Palfrey, John G., and Urs Gasser. 2012. _Interop: The Promise and Perils of Highly Interconnected Systems_. New York: Basic Books.
225. Parisi, Luciana. 2004. _Abstract Sex: Philosophy, Bio-Technology and the Mutations of Desire_. London: Continuum.
226. Patra, Nihar K., Bharat Kumar, and Ashis K. Pani. 2014. _Progressive Trends in Electronic Resource Management in Libraries_. Hershey, PA: Information Science Reference.
227. Paulheim, Heiko. 2015. “What the Adoption of Schema.org Tells About Linked Open Data.” _CEUR Workshop Proceedings_ 1362:85–90.
228. Peatling, G. K. 2004. “Public Libraries and National Identity in Britain, 1850–1919.” _Library History_ 20 (1): 33–47.
229. Pechenick, Eitan A., Christopher M. Danforth, Peter S. Dodds, and Alain Barrat. 2015. “Characterizing the Google Books Corpus: Strong Limits to Inferences of Socio-Cultural and Linguistic Evolution.” _PLoS One_ 10 (10).
230. Peters, John Durham. 2015. _The Marvelous Clouds: Toward a Philosophy of Elemental Media_. Chicago: University of Chicago Press.
231. Pfanner, Eric. 2011. “Quietly, Google Puts History Online.” _New York Times_ , November 20.
232. Pfanner, Eric. 2012. “Google to Announce Venture With Belgian Museum.” _New York Times_ , March 12. .
233. Philip, Kavita. 2005. “What Is a Technological Author? The Pirate Function and Intellectual Property.” _Postcolonial Studies: Culture, Politics, Economy_ 8 (2): 199–218.
234. Pine, Joseph B., and James H. Gilmore. 2011. _The Experience Economy_. Boston: Harvard Business Press.
235. Ping-Huang, Marianne. 2016. “Archival Biases and Cross-Sharing.” _NTIK_ 5 (1): 55–56.
236. Pollock, Griselda. 1998. “Modernity and the Spaces of Femininity.” In _Vision and Difference: Femininity, Feminism and Histories of Art_ , ed. Griselda Pollock, 245–256. London: Routledge & Kegan Paul.
237. Ponte, Stefano, Peter Gibbon, and Jakob Vestergaard. 2011. _Governing Through Standards: Origins, Drivers and Limitations_. Basingstoke, UK: Palgrave Macmillan.
238. Pörksen, Uwe. 1995. _Plastic Words: The Tyranny of a Modular Language_. University Park: Pennsylvania State University Press.
239. Proctor, Nancy. 2013. “Crowdsourcing—an Introduction: From Public Goods to Public Good.” _Curator_ 56 (1): 105–106.
240. Puar, Jasbir K. 2007. _Terrorist Assemblages: Homonationalism in Queer Times_. Durham, NC: Duke University Press.
241. Purdon, James. 2016. _Modernist Informatics: Literature, Information, and the State_. New York: Oxford University Press.
242. Putnam, Robert D. 1988. “Diplomacy and Domestic Politics: The Logic of Two-Level Games.” _International Organization_ 42 (3): 427–460.
243. Rabinow, Paul. 2003. _Anthropos Today: Reflections on Modern Equipment_. Princeton, NJ: Princeton University Press.
244. Rabinow, Paul, and Michel Foucault. 2011. _The Accompaniment: Assembling the Contemporary_. Chicago: University of Chicago Press.
245. Raddick, M., et al. 2009. “Galaxy Zoo: Exploring the Motivations of Citizen Science Volunteers.” _Astronomy Education Review_ 9 (1).
246. Ratto, Matt, and Boler Megan. 2014. _DIY Citizenship: Critical Making and Social Media_. Cambridge, MA: MIT Press.
247. Reichardt, Jasia. 1969. _Cybernetic Serendipity: The Computer and the Arts_. New York: Frederick A Praeger. .
248. Ridge, Mia. 2013. “From Tagging to Theorizing: Deepening Engagement with Cultural Heritage through Crowdsourcing.” _Curator_ 56 (4): 435–450.
249. Rieger, Oya Y. 2008. _Preservation in the Age of Large-Scale Digitization: A White Paper_. Washington, DC: Council on Library and Information Resources.
250. Rodekamp, Volker, and Bernhard Graf. 2012. _Museen zwischen Qualität und Relevanz: Denkschrift zur Lage der Museen_. Berlin: G+H Verlag.
251. Rogers, Richard. 2012. “Mapping and the Politics of Web Space.” _Theory, Culture & Society_ 29:193–219.
252. Romeo, Fiona, and Lucinda Blaser. 2011. “Bringing Citizen Scientists and Historians Together.” Museums and the Web. .
253. Russell, Andrew L. 2014. _Open Standards and the Digital Age: History, Ideology, and Networks_. New York: Cambridge University Press.
254. Said, Edward. 1983. “Traveling Theory.” In _The World, the Text, and the Critic_ , 226–247. Cambridge, MA: Harvard University Press.
255. Samimian-Darash, Limor, and Paul Rabinow. 2015. _Modes of Uncertainty: Anthropological Cases_. Chicago: The University of Chicago Press.
256. Samuel, Henry. 2009. “Nicolas Sarkozy Fights Google over Classic Books.” _The Telegraph_ , December 14. .
257. Samuelson, Pamela. 2010. “Google Book Search and the Future of Books in Cyberspace.” _Minnesota Law Review_ 94 (5): 1308–1374.
258. Samuelson, Pamela. 2011. “Why the Google Book Settlement Failed—and What Comes Next?” _Communications of the ACM_ 54 (11): 29–31.
259. Samuelson, Pamela. 2014. “Mass Digitization as Fair Use.” _Communications of the ACM_ 57 (3): 20–22.
260. Samyn, Jeanette. 2012. “Anti-Anti-Parasitism.” _The New Inquiry_ , September 18.
261. Sanderhoff, Merethe. 2014. _Sharing Is Caring: Åbenhed Og Deling I Kulturarvssektoren_. Copenhagen: Statens Museum for Kunst.
262. Sassen, Saskia. 2008. _Territory, Authority, Rights: From Medieval to Global Assemblages_. Princeton, NJ: Princeton University Press.
263. Schmidt, Henrike. 2009. “‘Holy Cow’ and ‘Eternal Flame’: Russian Online Libraries.” _Kultura_ 1, 4–8. .
264. Schmitz, Dawn. 2008. _The Seamless Cyberinfrastructure: The Challenges of Studying Users of Mass Digitization and Institutional Repositories_. Washington, DC: Digital Library Federation, Council on Library and Information Resources.
265. Schonfeld, Roger, and Liam Sweeney. 2017. “Inclusion, Diversity, and Equity: Members of the Association of Research Libraries.” _Ithaka S+R_ , August 30. .
266. Schüll, Natasha Dow. 2014. _Addiction by Design: Machine Gambling in Las Vegas_. Princeton, NJ: Princeton University Press.
267. Scott, James C. 2009. _Domination and the Arts of Resistance: Hidden Transcripts_. New Haven, CT: Yale University Press.
268. Seddon, Nicholas. 2013. _Government Contracts: Federal, State and Local_. Annandale, Australia: The Federation Press.
269. Serres, Michel. 2013. _The Parasite_. Minneapolis: University of Minnesota Press.
270. Sherratt, Tim. 2013. “From Portals to Platforms: Building New Frameworks for User Engagement.” National Library of Australia, November 5. .
271. Shukaitis, Stevphen. 2009. “Infrapolitics and the Nomadic Educational Machine.” In _Contemporary Anarchist Studies: An Introductory Anthology of Anarchy in the Academy_ , ed. Randall Amster. London: Routledge.
272. Smalls, James. 2003. “‘Race’ As Spectacle in Late-Nineteenth-Century French Art and Popular Culture.” _French Historical Studies_ 26 (2): 351–382.
273. Snyder, Francis. 2002. “Governing Economic Globalisation: Global Legal Pluralism and EU Law.” In _Regional and Global Regulation of International Trade_ , 1–47. Oxford: Hart Publishing.
274. Solá-Morales, Rubió I. 1999. _Differences: Topographies of Contemporary Architecture_. Cambridge, MA: MIT Press.
275. Sollfrank, Cornelia. 2015. “Nothing New Needs to Be Created. Kenneth Goldsmith’s Claim to Uncreativity.” In _No Internet—No Art. A Lunch Byte Anthology_ , ed. Melanie Bühler. Eindhoven: Onomatopee. .
276. Somers, Margaret R. 2008. _Genealogies of Citizenship: Markets, Statelessness, and the Right to Have Rights_. Cambridge: Cambridge University Press.
277. Sparks, Peter G. 1992. _A Roundtable on Mass Deacidification._ Report on a Meeting Held September 12–13, 1991, in Andover, Massachusetts. Washington, DC: Association of Research Libraries.
278. Spivak, Gayatri C. 2000. “Megacity.” _Grey Room_ 1 (1): 8–25.
279. Srnicek, Nick. 2017. _Platform Capitalism_. Cambridge: Polity Press.
280. Stanley, Amy D. 1998. _From Bondage to Contract: Wage Labor, Marriage, and the Market in the Age of Slave Emancipation_. Cambridge: Cambridge University Press.
281. Stelmakh, Valeriya D. 2008. “Book Saturation and Book Starvation: The Difficult Road to a Modern Library System.” _Kultura_ , September 4.
282. Stiegler, Bernard. n.d. “Amateur.” Ars Industrialis: Association internationale pour une politique industrielle des technologies de l’esprit. .
283. Star, Susan Leigh. 1999. “The Ethnography of Infrastructure.” _American Behavioral Scientist_ 43 (3): 377–391.
284. Steyerl, Hito. 2012. “In Defense of the Poor Image.” In _The Wretched of the Screen_. Berlin, Germany: Sternberg Press.
285. Stiegler, Bernard. 2003. _Aimer, s’aimer, nous aimer_. Paris: Éditions Galilée.
286. Suchman, Mark C. 2003. “The Contract as Social Artifact.” _Law & Society Review_ 37 (1): 91–142.
287. Sumner, William G. 1952. _What Social Classes Owe to Each Other_. Caldwell, ID: Caxton Printers.
288. Tate, Jay. 2001. “National Varieties of Standardization.” In _Varieties of Capitalism: The Institutional Foundations of Comparative Advantage_ , ed. Peter A. Hall and David Soskice. Oxford: Oxford University Press.
289. Tawa, Michael. 2012. “Limits of Fluxion.” In _Architecture in the Space of Flows_ , eds. Andrew Ballantyne and Chris Smith. Abingdon, UK: Routledge.
290. Tay, J. S. W., and R. H. Parker. 1990. “Measuring International Harmonization and Standardization.” _Abacus_ 26 (1): 71–88.
291. Tenen, Dennis, and Maxwell Henry Foxman. 2014. “ _Book Piracy as Peer Preservation_.” Columbia University Academic Commons. doi: 10.7916/D8W66JHS.
292. Teubner, Gunther. 1997. _Global Law Without a State_. Aldershot, UK: Dartmouth.
293. Thussu, Daya K. 2007. _Media on the Move: Global Flow and Contra-Flow_. London: Routledge.
294. Tiffen, Belinda. 2007. “Recording the Nation: Nationalism and the History of the National Library of Australia.” _Australian Library Journal_ 56 (3): 342.
295. Tsilas, Nicos. 2011. “Open Innovation and Interoperability.” In _Opening Standards: The Global Politics of Interoperability_ , ed. Laura DeNardis. Cambridge, MA: MIT Press.
296. Tygstrup, Frederik. 2014. “The Politics of Symbolic Forms.” In _Cultural Ways of Worldmaking: Media and Narratives_ , ed. Ansgar Nünning, Vera Nünning, and Birgit Neumann. Berlin: De Gruyter.
297. Vaidhyanathan, Siva. 2011. _The Googlization of Everything: (and Why We Should Worry)_. Berkeley: University of California Press.
298. van Dijck, José. 2012. “Facebook as a Tool for Producing Sociality and Connectivity.” _Television & New Media_ 13 (2): 160–176.
299. Veel, Kristin. 2003. “The Irreducibility of Space: Labyrinths, Cities, Cyberspace.” _Diacritics_ 33:151–172.
300. Venn, Couze. 2006. “The Collection.” _Theory, Culture & Society_ 23:35–40.
301. Verhoeven, Deb. 2016. “As Luck Would Have It: Serendipity and Solace in Digital Research Infrastructure.” _Feminist Media Histories_ 2 (1): 7–28.
302. Vise, David A., and Mark Malseed. 2005. _The Google Story_. New York: Delacorte Press.
303. Voltaire. 1786. _Dictionaire Philosophique_ (Oeuvres Completes de Voltaire, Tome Trente-Huiteme). Gotha, Germany: Chez Charles Guillaume Ettinger, Librarie.
304. Vul, Vladimir Abramovich. 2003. “Who and Why? Bibliotechnoye Delo,” _Librarianship_ 2 (2). .
305. Walker, Kevin. 2006. “Story Structures: Building Narrative Trails in Museums.” In _Technology-Mediated Narrative Environments for Learning_ , eds. G. Dettori, T. Giannetti, A. Paiva, and A. Vaz, 103–114. Dordrecht: Sense Publishers.
306. Walker, Neil. 2003. _Sovereignty in Transition_. Oxford: Hart.
307. Weigel, Moira. 2016. _Labor of Love: The Invention of Dating_. New York: Farrar, Straus and Giroux.
308. Weiss, Andrew, and Ryan James. 2012. “Google Books’ Coverage of Hawai’i and Pacific Books.” _Proceedings of the American Society for Information Science and Technology_ 49 (1): 1–3.
309. Weizman, Eyal. 2006. “Lethal Theory.” _Log_ 7:53–77.
310. Wilson, Elizabeth. 1992. “The Invisible Flâneur.” _New Left Review_ 191 (January–February): 90–110.
311. Wolf, Gary. 2003. “The Great Library of Amazonia.” _Wired_ , November.
312. Wolff, Janet. 1985. “The Invisible Flâneuse. Women and the Literature of Modernity.” _Theory, Culture & Society_ 2 (3): 37–46.
313. Yeo, Richard R. 2003. “A Solution to the Multitude of Books: Ephraim Chambers’s ‘Cyclopaedia’ (1728) as ‘the Best Book in the Universe.’” _Journal of the History of Ideas_ 64 (1): 61–72.
314. Young, Michael D. 1988. _The Metronomic Society: Natural Rhythms and Human Timetables_. Cambridge, MA: Harvard University Press.
315. Yurchak, Alexei. 1997. “The Cynical Reason of Late Socialism: Power, Pretense, and the Anekdot.” _Public Culture_ 9 (2): 161–188.
316. Yurchak, Alexei. 2006. _Everything Was Forever, Until It Was No More: The Last Soviet Generation_. Princeton, NJ: Princeton University Press.
317. Yurchak, Alexei. 2008. “Suspending the Political: Late Soviet Artistic Experiments on the Margins of the State.” _Poetics Today_ 29 (4): 713–733.
318. Žižek, Slavoj. 2009. _The Plague of Fantasies_. London: Verso.
319. Zuckerman, Ethan. 2008. “Serendipity, Echo Chambers, and the Front Page.” _Nieman Reports_ 62 (4). .

© 2018 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any
electronic or mechanical means (including photocopying, recording, or
information storage and retrieval) without permission in writing from the
publisher.

This book was set in ITC Stone Sans Std and ITC Stone Serif Std by Toppan
Best-set Premedia Limited. Printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Names: Thylstrup, Nanna Bonde, author.

Title: The politics of mass digitization / Nanna Bonde Thylstrup.

Description: Cambridge, MA : The MIT Press, [2018] | Includes bibliographical
references and index.

Identifiers: LCCN 2018010472 | ISBN 9780262039017 (hardcover : alk. paper)

eISBN 9780262350044

Subjects: LCSH: Library materials--Digitization. | Archival materials--
Digitization. | Copyright and digital preservation.

Classification: LCC Z701.3.D54 T49 2018 | DDC 025.8/4--dc23 LC record
available at

 
